I wanted to circulate an alternative API for addressing the issue of model-specific optimizations in MLlib, https://issues.apache.org/jira/browse/SPARK-22126. This builds on top of the work by Weichen and the API he's proposed. The idea here is to separate the description of how multiple model instances can be fit in parallel from the actual execution of that fitting. I propose that we add a new method called fitMultiple (working title) which returns Array[Callable[Model[_]]]. Calling fitMultiple(dataset, paramMaps).map(callable => callable.call()) should produce the same result as paramMaps.map(pm => fit(dataset, pm)), but could include model-specific performance optimizations. The Callables returned by fitMultiple should be designed to be thread safe so that they can be run in parallel (e.g., by CrossValidator).
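To make the parallel-execution contract concrete, here is a minimal sketch of how a caller such as CrossValidator could run the returned Callables on a thread pool. FakeModel and fakeFitMultiple are hypothetical stand-ins for illustration only; no actual Spark types are involved:

```scala
import java.util.concurrent.{Callable, Executors, TimeUnit}

// Hypothetical stand-in for Spark's Model[_].
case class FakeModel(id: Int)

object ParallelFitSketch {
  // Stands in for fitMultiple(dataset, paramMaps): one Callable per ParamMap.
  def fakeFitMultiple(n: Int): Array[Callable[FakeModel]] =
    Array.tabulate(n) { i =>
      new Callable[FakeModel] {
        // Would be fit(dataset, paramMap) in the real implementation.
        override def call(): FakeModel = FakeModel(i)
      }
    }

  // Submit the callables to a fixed-size pool and collect models in order,
  // which is what a parallel CrossValidator could do.
  def fitInParallel(callables: Array[Callable[FakeModel]],
                    threads: Int): Seq[FakeModel] = {
    val pool = Executors.newFixedThreadPool(threads)
    try {
      val futures = callables.map(pool.submit(_))
      futures.map(_.get()).toSeq
    } finally {
      pool.shutdown()
      pool.awaitTermination(10, TimeUnit.SECONDS)
    }
  }
}
```

Because each Callable is thread safe, the caller is free to pick any degree of parallelism without coordinating with the Estimator.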
We would provide a default implementation of fitMultiple(...) so that developers who write their own Estimators/Transformers would not need to implement this method unless they wanted to include performance optimizations:
def fitMultiple(dataset: Dataset[_], paramMaps: Array[ParamMap]): Array[Callable[M]] = {
  paramMaps.map { paramMap =>
    new Callable[M] {
      override def call(): M = fit(dataset, paramMap)
    }
  }
}
We would also update fit(dataset, paramMaps) to be a wrapper around fitMultiple, so that it benefits from any performance optimizations while still running the fitting synchronously in the current thread:
def fit(dataset: Dataset[_], paramMaps: Array[ParamMap]): Seq[M] = {
  fitMultiple(dataset, paramMaps).map(_.call())
}
What Weichen is proposing makes sense to me. I don't see how you would efficiently synchronize the Callables if they were piggybacking on each other's results. And I think this would matter even with large thread pools, since the number of dependent tasks is not bounded and can be O(N) (if I understood Tim's comment correctly).