I wanted to circulate an alternative API for addressing the issue of model-specific optimizations in MLlib, https://issues.apache.org/jira/browse/SPARK-22126. This builds on top of the work by Weichen and the API he's proposed. The idea here is to separate the description of how multiple model instances can be fit in parallel from the actual execution of that fitting. I propose that we add a new method called fitMultiple (working title) which returns Array[Callable[Model[_]]]. fitMultiple(dataset, paramMaps).map(callable => callable.call()) should produce the same result as paramMaps.map(pm => fit(dataset, pm)), but could include model-specific performance optimizations. The Callables returned by fitMultiple should be designed to be thread safe so that they can be run in parallel (e.g. by CrossValidator). We would provide a default implementation of fitMultiple(...) so that even developers who write their own Estimators/Transformers would not need to implement this method unless they wanted to include performance optimizations:
import java.util.concurrent.Callable

def fitMultiple(dataset: Dataset[_], paramMaps: Array[ParamMap]): Array[Callable[M]] = {
  // Default implementation: one independent Callable per ParamMap,
  // each simply delegating to the single-ParamMap fit().
  paramMaps.map { paramMap =>
    new Callable[M] {
      override def call(): M = fit(dataset, paramMap)
    }
  }
}
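For illustration, here is roughly how a caller such as CrossValidator might drive the returned Callables from a thread pool. This is only a sketch, not actual CrossValidator code; the fitInParallel helper and its parallelism parameter are made up for the example:

import java.util.concurrent.{Callable, Executors}
import scala.collection.JavaConverters._

// Hypothetical driver: run the returned Callables on a fixed-size pool
// and collect the fitted models in paramMap order.
def fitInParallel[M](callables: Array[Callable[M]], parallelism: Int): Seq[M] = {
  val pool = Executors.newFixedThreadPool(parallelism)
  try {
    // invokeAll blocks until every task has completed (or failed);
    // get() rethrows any failure from the corresponding fit.
    pool.invokeAll(callables.toSeq.asJava).asScala.map(_.get()).toSeq
  } finally {
    pool.shutdown()
  }
}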
We would also update fit(dataset, paramMaps) to be a thin wrapper around fitMultiple, so that it benefits from any performance optimizations while still running the fitting synchronously in the current thread:
def fit(dataset: Dataset[_], paramMaps: Array[ParamMap]): Seq[M] = {
  fitMultiple(dataset, paramMaps).map { _.call() }
}
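To give a concrete (purely hypothetical) example of the kind of model-specific optimization fitMultiple enables, an Estimator could override it to cache the input once and share it across all of the fits, unpersisting after the last Callable completes:

import java.util.concurrent.Callable
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical override, not an existing MLlib implementation: cache the
// input once, share it across all fits, and unpersist after the last
// Callable finishes. Each Callable remains independent and thread safe.
override def fitMultiple(dataset: Dataset[_], paramMaps: Array[ParamMap]): Array[Callable[M]] = {
  val cached = dataset.toDF().cache()
  val remaining = new AtomicInteger(paramMaps.length)
  paramMaps.map { paramMap =>
    new Callable[M] {
      override def call(): M = {
        try fit(cached, paramMap)
        finally if (remaining.decrementAndGet() == 0) cached.unpersist()
      }
    }
  }
}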
Making the Callables piggy-back on each other would add complexity, and it seems it would also introduce a risk of deadlock if they are scheduled in the wrong order. For example, with the default implementation the callables are executed serially; if a callable that piggy-backs on others is scheduled first, wouldn't that deadlock?
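To make that scenario concrete, here is a contrived sketch (not part of the proposal) where one Callable waits on another's side effect; executed serially in the wrong order it never completes:

import java.util.concurrent.{Callable, CountDownLatch}

// Contrived illustration of the hazard: b "piggy-backs" on a's work.
val aDone = new CountDownLatch(1)
val a = new Callable[Int] {
  override def call(): Int = { aDone.countDown(); 1 }
}
val b = new Callable[Int] {
  // Blocks until a has run; safe only if a runs first or concurrently.
  override def call(): Int = { aDone.await(); 2 }
}
// Serial execution in this order never returns: b waits for a,
// but a is scheduled after b.
Array(b, a).map(_.call())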