I’ve done some research and I think I have a solution for this. We’ll create a new bootstrap-style service that the PuppetDB service will depend on. It will have a function that registers a module’s influence on the startup process of PuppetDB. Function names that are not terrible are left as an exercise to the reader.
(register-startup-notification [this other-service])
(startup-completed [this other-service])
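Concretely, that could be a Trapperkeeper-style service protocol along these lines (the protocol name `Bootstrapper` and the docstrings are placeholders, not settled API):

```clojure
;; Hypothetical protocol for the new bootstrap service; the name
;; `Bootstrapper` is a placeholder, not a settled API.
(defprotocol Bootstrapper
  (register-startup-notification [this other-service]
    "Record that `other-service` influences PuppetDB startup.")
  (startup-completed [this other-service]
    "Mark `other-service` as having finished its start phase."))
```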
Interested services, such as the PuppetDB service and the sync service, will register in the init phase of the TK lifecycle. This bootstrap service will assoc the service id of other-service into an atom in its context. Then, in the start function of this new bootstrap service, it will add a catch-all handler at “/” that will always return an error response. I’m not sure whether this should be a web page or just a 500 error with some text, but it will service all requests and return the error.
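Whichever response shape we pick, the catch-all itself is just a Ring handler that ignores the request; a minimal sketch (the status code, wording, and function name are placeholders):

```clojure
;; Minimal sketch (hypothetical name): a Ring-style catch-all handler
;; that services every request with an error until startup finishes.
(defn catch-all-handler [_request]
  {:status  500  ; or perhaps 503 Service Unavailable
   :headers {"Content-Type" "text/plain"}
   :body    "PuppetDB is still starting up; please retry shortly."})
```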
Each service that registered itself will then call startup-completed as the last thing its start function does. This new bootstrap service will remove the catch-all handler once all of the registered services have called startup-completed.
We don’t currently have the ability to remove a Jetty handler. I think this will be fairly easy, as Jetty supports it and we have similar functions that add handlers using that low-level Jetty API. The add-ring-handler function can be found here. That function returns the Jetty ring handler object, which we will need in order to remove the ring handler (i.e. this new function will accept that handler as its arg). It would be nice if we could pass the route in (i.e. (remove-handler service “/”)), but Jetty doesn’t expose this in its API without doing some potentially unsafe casting and guessing about what kinds of handlers are present.
The bootstrap service can add the ring handler and create a function that closes over that handler object, then use the closure to remove the handler once the last service has completed. It might be easiest to just add a watcher on the atom.
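A rough sketch of that bookkeeping with a plain atom and `add-watch` (all names here are hypothetical, and `remove-handler!` stands in for the not-yet-written Jetty removal function):

```clojure
;; Hypothetical sketch of the bootstrap service's bookkeeping.
;; `remove-handler!` stands in for the not-yet-written Jetty removal fn.
(def startup-state (atom {:registered #{} :completed #{}}))

(defn register-startup-notification [service-id]
  (swap! startup-state update :registered conj service-id))

(defn startup-completed [service-id]
  (swap! startup-state update :completed conj service-id))

(defn watch-for-startup [remove-handler!]
  (add-watch startup-state ::remove-catch-all
    (fn [_key _ref _old {:keys [registered completed]}]
      ;; Fire once every registered service has reported completion.
      (when (and (seq registered) (= registered completed))
        (remove-handler!)))))
```

Usage would be: services register during init, the bootstrap service calls `watch-for-startup` from its start function with a closure over the Jetty handler object, and each service calls `startup-completed` at the end of its own start.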
I'm gonna play Devil's advocate here.
Why not just use trapperkeeper service lifecycles in a clever way?
If we separated the `query-fn` and `command-fn` services (the services which offer in-process command submission and querying) into their own services (call them `pdb-in-proc-query` and `pdb-in-proc-cmd`), and had separate `pdb-query-handler` and `pdb-command-handler` services which served our ring handlers in the appropriate places, we could have a service in the middle, `PuppetDBRegister`, which relies on the in-process services being up (maybe the in-proc services satisfy a `startup-complete` interface). The handler services would rely on this `PuppetDBRegister` service being started before they served our handlers. Then in extensions we could have a different implementation of that same service which also relied on the sync service, and the handler functions would be none the wiser.
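To make the ordering argument concrete, here is a toy, library-free simulation of the start order Trapperkeeper's dependency resolution would produce for these services (the service names are the hypothetical ones from above; real code would declare the dependencies in `defservice` instead):

```clojure
;; Toy simulation (hypothetical service names): a service may start
;; only after everything it depends on has started. Assumes the
;; dependency graph is acyclic, as Trapperkeeper requires.
(def services
  {:pdb-in-proc-query   #{}
   :pdb-in-proc-cmd     #{}
   :PuppetDBRegister    #{:pdb-in-proc-query :pdb-in-proc-cmd}
   :pdb-query-handler   #{:PuppetDBRegister}
   :pdb-command-handler #{:PuppetDBRegister}})

(defn start-order [svcs]
  (loop [started [] remaining svcs]
    (if (empty? remaining)
      started
      (let [ready (for [[svc deps] remaining
                        :when (every? (set started) deps)]
                    svc)]
        (recur (into started ready)
               (apply dissoc remaining ready))))))
```

With the dependencies above, `PuppetDBRegister` always lands after both in-proc services and before both handler services, which is exactly the gating the explicit bootstrap service was trying to achieve.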