monorepo tooling is still NX, just way simplified.
.
├── README.md
├── apps
│ ├── api
│ │ └── Dockerfile
│ ├── worker
│ │ └── Dockerfile
│ ├── desktop
│ └── web
│ └── Dockerfile
├── db
│ ├── init-scripts
│ ├── migrations
│ └── seeds
├── libs
│ ├── common
│ ├── ui-common
│ ├── data
│ │ ├── generated
│ │ └── graphql
│ ├── handlers
│ │ ├── enrichRawOrderEventHandler
│ │ ├── orderEventHandler
│ │ └── routeEventHandler
│ └── models
├── deployment
└── tools
ALL microservices will use the SAME application, but with different config/parameters given at runtime to select the desired MODULE that does the actual biz work.
eg: we have a nest app called WORKER.
in cluster, we have, say, 5 different DEPLOYMENTS that all use the same docker image: valstro/worker
but with 5 different commands:
node worker.js --topic order-events-dev --handler orchestrate-order-events
node worker.js --topic order-route-requests-dev --handler route-orders-handler
node worker.js --topic incoming-raw-orders --handler enrich-and-create-orders-handler
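a minimal sketch of that dispatch, assuming node's built-in parseArgs and a registry keyed by handler name (the package paths and registry shape are illustrative, not final):

```ts
// worker.ts -- dispatch sketch only; paths/names below are assumptions
import { parseArgs } from 'node:util';

interface EventHandler<T = unknown> {
  handleEvent(event: T): Promise<void>;
}

// transport layer entry point, sketched under apps/worker below
declare function runConsumer(topic: string, handler: EventHandler): Promise<void>;

// hypothetical mapping of --handler values to HANDLER MODULES in libs/handlers
const handlers: Record<string, () => Promise<EventHandler>> = {
  'orchestrate-order-events': async () =>
    (await import('@valstro/handlers/orderEventHandler')).default,
  'route-orders-handler': async () =>
    (await import('@valstro/handlers/routeEventHandler')).default,
  'enrich-and-create-orders-handler': async () =>
    (await import('@valstro/handlers/enrichRawOrderEventHandler')).default,
};

async function main(): Promise<void> {
  const { values } = parseArgs({
    options: {
      topic: { type: 'string' },
      handler: { type: 'string' },
    },
  });
  const load = values.handler ? handlers[values.handler] : undefined;
  if (!values.topic || !load) {
    throw new Error(
      `usage: node worker.js --topic <topic> --handler <${Object.keys(handlers).join(' | ')}>`,
    );
  }
  await runConsumer(values.topic, await load());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```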
this does a bunch of good things for our monorepo and dev experience:
- devs don't have to think about the mechanics of running a service, just the biz logic.
- all transport layer code lives in the WORKER; only biz logic lives in HANDLER modules. potentially the WORKER can even be owned by platform in the beginning.
- deployments are way faster and don't grow linearly with the number of WORKER permutations: N HANDLER MODULES = 1 docker build
- best practices come for free for anyone building a service
- HANDLER MODULES can even be developed outside the monorepo if we really want to
- enable many topic :: many consumer pretty easily: just run node worker.js with the same topic arg and different consumer args (see the example after this list)
- no more env variable sprawl for different kafka consumers/services.
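for example, fanning the same topic out to two consumers (the --consumer-group flag and the second handler name are hypothetical, just to show the shape):

node worker.js --topic order-events-dev --handler orchestrate-order-events --consumer-group orchestrate
node worker.js --topic order-events-dev --handler audit-order-events --consumer-group audit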
apps/api
- user-facing api for ingestion, exports, etc.
- graphql api used for ALL data-access. no other application needs direct DB access; everything goes through graphql (see the resolver sketch below).
- all other API needs
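a rough sketch of what a resolver in apps/api could look like with @nestjs/graphql code-first decorators (the Order shape and data-access call are placeholders):

```ts
// apps/api sketch -- @nestjs/graphql code-first; Order/findOrderById are placeholders
import { Resolver, Query, Args, ObjectType, Field, ID } from '@nestjs/graphql';

@ObjectType()
class Order {
  @Field(() => ID)
  id!: string;

  @Field()
  status!: string;
}

// placeholder for the api's own data-access layer (the only thing touching the DB)
declare function findOrderById(id: string): Promise<Order | null>;

@Resolver(() => Order)
export class OrdersResolver {
  @Query(() => Order, { nullable: true })
  async order(@Args('id', { type: () => ID }) id: string): Promise<Order | null> {
    // web, desktop, and any worker that needs reads all come through here
    return findOrderById(id);
  }
}
```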
apps/web
- our react web app. doesn't have to change much
apps/desktop
- our tauri app
apps/worker
- kafka consumer driver
- parameters determine the kafka topic to listen to, and the handler to execute on consume
- single kafka consumer application; can be run any number of times for any permutation of topic X handler
- ALL transport-level logic lives here: consumer config, topic subscription, batching, the onMessage handler, offset handling, tracing, etc. (sketched below)
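and the transport layer itself, sketched with kafkajs (assuming kafkajs as the client; batching, retries, and tracing elided):

```ts
// apps/worker transport sketch -- assumes kafkajs; batching/retry/tracing elided
import { Kafka } from 'kafkajs';

interface EventHandler<T = unknown> {
  handleEvent(event: T): Promise<void>;
}

export async function runConsumer(topic: string, handler: EventHandler): Promise<void> {
  const kafka = new Kafka({
    clientId: 'worker',
    brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','), // broker config is an assumption
  });
  const consumer = kafka.consumer({ groupId: `worker-${topic}` }); // group naming scheme is an assumption
  await consumer.connect();
  await consumer.subscribe({ topics: [topic] });
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      // the handler only ever sees the parsed event -- no kafka types leak into libs/handlers
      await handler.handleEvent(JSON.parse(message.value.toString()));
      // offsets for handled messages are auto-committed by kafkajs defaults
    },
  });
}
```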
db
- database migrations
- database seeding scripts
- database init scripts
libs/common
- code used in any other app or lib
- cross-cutting modules like logging, etc. (see sketch below)
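eg. a shared logging module could be as small as this (shape is illustrative):

```ts
// libs/common sketch -- one cross-cutting logger, imported once, injectable everywhere
import { Global, Injectable, Logger, Module } from '@nestjs/common';

@Injectable()
export class AppLogger extends Logger {
  // room for cross-cutting concerns: correlation ids, structured output, etc.
}

@Global()
@Module({
  providers: [AppLogger],
  exports: [AppLogger],
})
export class LoggingModule {}
```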
libs/ui-common
- code used in web and desktop
libs/data
- graphql queries and types
- generated typescript types from gql (see sketch below)
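assuming graphql-codegen (or similar) writes the types into libs/data/generated, a query module might look like:

```ts
// libs/data sketch -- the query lives next to its generated types
import { gql } from '@apollo/client';
// codegen output (assumed names/paths)
import type { GetOrderQuery, GetOrderQueryVariables } from './generated/graphql';

export const GET_ORDER = gql`
  query GetOrder($id: ID!) {
    order(id: $id) {
      id
      status
    }
  }
`;

// web and desktop both import this: same query, same generated types
export type { GetOrderQuery, GetOrderQueryVariables };
```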
libs/models
- heavy models backed by xState machines where it makes sense (OrderModel; sketched below)
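what "backed by xState" might look like for OrderModel; the states and events below are invented for illustration (xstate v4 api):

```ts
// libs/models sketch -- hypothetical OrderModel lifecycle as an xState machine
import { createMachine, interpret } from 'xstate';

export const orderMachine = createMachine({
  id: 'order',
  initial: 'requested',
  states: {
    requested: { on: { ROUTE: 'routed' } },
    routed: { on: { FILL: 'filled', REJECT: 'rejected' } },
    filled: { type: 'final' },
    rejected: { type: 'final' },
  },
});

// drive the model with events instead of scattering status flags around
const order = interpret(orderMachine).start();
order.send({ type: 'ROUTE' });
```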
libs/handlers
- modules that actually know how to handle specific types of events and take a specific action (single-purpose)
- handlers are all driven by the WORKER app, given a specified kafka topic and handler name
- single-purpose: one CATEGORY of event handled by each HANDLER
- one event CATEGORY could have multiple TYPES of events in the typescript sense (eg. polymorphism: OrderRequestedEvent extends OrderEvent, etc.)
- all have the same api, eg. handleEvent(event: T): Promise<void>, to interface with consumer logic in the WORKER app. no TRANSPORT-level logic here, just biz logic for event handling (see the sketch after this list).
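the shared contract plus one example handler (event shapes are illustrative; only handleEvent is prescribed):

```ts
// libs/handlers sketch -- the one interface every HANDLER MODULE implements
export interface EventHandler<T> {
  handleEvent(event: T): Promise<void>;
}

// one CATEGORY per handler; the category can still span multiple event TYPES
interface OrderEvent {
  type: string;
  orderId: string;
}
interface OrderRequestedEvent extends OrderEvent {
  type: 'ORDER_REQUESTED';
  symbol: string; // illustrative field
}

export class OrderEventHandler implements EventHandler<OrderEvent> {
  async handleEvent(event: OrderEvent): Promise<void> {
    // biz logic only -- no kafka/transport concepts in here
    switch (event.type) {
      case 'ORDER_REQUESTED':
        // narrow to the specific TYPE within the category
        return this.onRequested(event as OrderRequestedEvent);
      default:
        return; // ignore event TYPES this handler doesn't care about
    }
  }

  private async onRequested(event: OrderRequestedEvent): Promise<void> {
    // ...create/orchestrate the order
  }
}
```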
deployment
- helm charts
- other resources needed for deployments