
Flexible perception pipeline manipulation for RoboSherlock (GSOC 2018)


This project enables RoboSherlock to run multiple parallel annotator-processing branches while preserving its original capability of linear execution. It increases RoboSherlock's flexibility to handle concurrent or sequential processing, improving overall execution performance, scalability, and error handling. The project was carried out to fulfill the requirements of Google Summer of Code 2018 with IAI, University of Bremen.

The project has been merged into the master branch of RoboSherlock. The API is simple to use: a flag `parallel` in the ROS launch file indicates whether the pipeline should be executed in parallel. However, it requires a new dependency, KnowRob, to compile and run correctly.
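As an illustration, such a flag is typically declared as a launch-file argument. The snippet below is a hypothetical sketch of that wiring using standard roslaunch syntax; the actual contents of rs.launch may differ.

```xml
<!-- Hypothetical sketch: declare a "parallel" argument and forward it to the
     node as a parameter. The real rs.launch wiring may look different. -->
<launch>
  <arg name="parallel" default="false"/>
  <param name="parallel" value="$(arg parallel)"/>
</launch>
```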

Pull request: [Link]


  • Implement RSParallelPipelinePlanner, which marks the execution orderings of annotators based on their required inputs and outputs.
  • Implement RSAggregatedAnalysisEngine, which executes the pipeline in parallel or sequentially, regulated by the flag `parallel`, following the execution model that RSParallelPipelinePlanner outputs.
  • Port RSAggregatedAnalysisEngine to the old implementation of RoboSherlock to preserve the original behavior.

=> This project has successfully enhanced the flexibility of RoboSherlock's annotator execution pipeline.


The PR implements these new features:

  • Planning of a parallel pipeline data structure from the annotator dependency chain queried from KnowRob through the JsonProlog module.
  • A parallel execution model that runs annotators concurrently based on that data structure.
  • Several failsafe mechanisms for better error handling when issues occur during parallel execution.


These ideas were implemented in the following modules:

  • DependencyQuery: an interface that queries annotator dependencies from KnowRob
  • DirectedGraph: a data structure that holds the annotator dependency chain for planning annotator orderings
  • AnnotatorDependencyPlanner: a function that traverses the DirectedGraph to plan annotator orderings using the algorithm described below
  • RSParallelPipelinePlanner: a component that combines annotator dependency querying and annotator ordering planning
  • RSAggregatedAnalysisEngine: a class that inherits from uima::AnalysisEngine and implements a parallel execution model
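To make the DirectedGraph role concrete, here is a minimal, hypothetical sketch of such a dependency graph as an adjacency list keyed by annotator name. This is illustrative only and not the actual RoboSherlock class; an edge A -> B means B consumes an output of A and must run after it.

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical minimal dependency graph (not the real RoboSherlock code).
// An edge from -> to means annotator `to` depends on an output of `from`.
struct DirectedGraph
{
  std::map<std::string, std::set<std::string>> children;  // node -> outgoing edges
  std::map<std::string, int> parentCount;                 // node -> incoming edges

  void addNode(const std::string &n)
  {
    children[n];      // default-construct entries so every node is known
    parentCount[n];
  }

  void addEdge(const std::string &from, const std::string &to)
  {
    addNode(from);
    addNode(to);
    if(children[from].insert(to).second)  // ignore duplicate edges
      ++parentCount[to];
  }

  // Nodes with no parents can start immediately (ordering 0).
  std::vector<std::string> roots() const
  {
    std::vector<std::string> r;
    for(const auto &entry : parentCount)
      if(entry.second == 0)
        r.push_back(entry.first);
    return r;
  }
};
```

The planner only needs cheap queries for "who has no remaining parents", which is why incoming-edge counts are stored alongside the adjacency list.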


The flow of the flexible pipeline enhancement is shown below:

(Figure: flexible pipeline flow diagram)

As mentioned above, RSAggregatedAnalysisEngine inherits from uima::AnalysisEngine and is ported to RSControledAnalysisEngine to preserve the sequential pipeline process while extending its capability to execute pipelines in parallel. Whenever the init() or applyNextPipeline() function of RSControledAnalysisEngine is called with a list of sequential annotators, the flow in this diagram is invoked.

First, RSPipelineManager calls the retrieveAnnotatorsInputOutput() and planPipelineStructure() functions of RSParallelPipelinePlanner to plan annotator orderings for parallel execution. planPipelineStructure() plans annotator orderings using the algorithm below:

G - dependency graph
M - mapping between node ID and longest path length
S - list of current nodes that have no parents

   insert no-parent nodes from G into S
   while True:
       while S is not empty:
           n = S.pop()
           for each node m with an edge e from n to m:
               remove edge e from graph G
               if M[m] < M[n] + 1:
                   M[m] = M[n] + 1
           remove node n from G
       insert no-parent nodes from G into S
       if S is empty:
           if there are vertices left in G:
               warn that graph G has a dependency loop
               return false
           return true
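A compact C++ sketch of this leveling algorithm is given below. Names and signatures are illustrative, not the actual RSParallelPipelinePlanner API; the idea is a Kahn-style topological traversal that assigns each node the length of its longest path from a source, which then serves as its ordering index.

```cpp
#include <cstddef>
#include <map>
#include <set>
#include <vector>

// Assigns each node its longest-path depth ("ordering index") via a
// Kahn-style traversal. Returns false if the graph contains a cycle.
// Illustrative sketch, not the actual RoboSherlock implementation.
bool planOrderings(std::map<int, std::set<int>> graph,  // node -> children
                   std::map<int, int> &level)           // node -> ordering index
{
  // Count incoming edges for every node, including pure sinks.
  std::map<int, int> indegree;
  for(const auto &entry : graph)
  {
    indegree[entry.first];  // ensure the node exists with indegree 0
    for(int child : entry.second)
      ++indegree[child];
  }

  for(const auto &entry : indegree)
    level[entry.first] = 0;

  // S: current nodes that have no parents.
  std::vector<int> S;
  for(const auto &entry : indegree)
    if(entry.second == 0)
      S.push_back(entry.first);

  std::size_t processed = 0;
  while(!S.empty())
  {
    int n = S.back();
    S.pop_back();
    ++processed;
    for(int m : graph[n])
    {
      if(level[m] < level[n] + 1)
        level[m] = level[n] + 1;  // longest path to m wins
      if(--indegree[m] == 0)      // "remove edge"; m becomes parentless
        S.push_back(m);
    }
  }
  // Nodes left unprocessed indicate a dependency loop.
  return processed == indegree.size();
}
```

Nodes that end up with the same level have no dependencies between them and can run in parallel; levels are then executed in ascending order.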

Next, the flag parallel_ decides whether RSAggregatedAnalysisEngine calls parallelProcess() for parallel execution or process() for sequential execution. If parallel_ = true, the annotator orderings data structure is passed to parallelProcess(), which executes the annotator pipeline in parallel based on that structure. The parallel execution model of parallelProcess() is described below:

L - list of orderings

function parallelOrderingProcess:
    for each ordering O in list L:
        for each annotator Ai in O:
            Ti = start thread calling the PrimitiveEngine process (returns error code Ti)
        lock until all Ti of O have returned

        for each Ti in O:
            if Ti != UIMA_ERROR_NONE:
                print stack traces
                exit with error

This parallel execution model is implemented with the help of std::async and std::promise to lock each ordering. Note that this model does not require CAS splitting or merging; all executions operate directly on the base CAS with the help of mutexes. For example, consider the planned annotator orderings shown in the figure:


(Figure: example planned annotator orderings)

The parallelProcess() function processes annotators within the same ordering in parallel. For instance, ordering 0 consists only of CollectionReader, so only CollectionReader runs; ordering 3 consists of PlaneAnnotator and NormalEstimator, so these two annotators execute in parallel, and execution waits for both to complete before advancing to the next ordering. The computation time of each ordering is therefore the longest processing time among the annotators in that ordering.
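The ordering-by-ordering barrier described above can be sketched with std::async as follows. The Annotator alias and the error-throwing policy are simplifications for illustration; the real engine dispatches uima primitive engines and reports UIMA error codes rather than throwing.

```cpp
#include <functional>
#include <future>
#include <stdexcept>
#include <vector>

// Simplified stand-in for a primitive engine's process() call:
// returns an error code, where 0 plays the role of UIMA_ERROR_NONE.
using Annotator = std::function<int()>;

// Runs each ordering's annotators concurrently and blocks until the whole
// ordering has finished before advancing. Illustrative sketch only.
void parallelProcess(const std::vector<std::vector<Annotator>> &orderings)
{
  for(const auto &ordering : orderings)
  {
    std::vector<std::future<int>> results;
    for(const auto &annotator : ordering)
      // std::launch::async forces each annotator onto its own thread.
      results.push_back(std::async(std::launch::async, annotator));

    // Barrier: get() waits for every annotator in this ordering,
    // so the next ordering never starts early.
    for(auto &r : results)
      if(r.get() != 0)
        throw std::runtime_error("annotator failed in parallel ordering");
  }
}
```

Using future::get() as the per-ordering barrier keeps the model simple: the slowest annotator in an ordering determines that ordering's wall-clock time, matching the cost model described above.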


The project itself is an enhancement of the RoboSherlock framework, so there are no additional dependencies to install other than those of the RoboSherlock framework itself.

This section assumes the user has already set up a ROS Kinetic workspace and installed KnowRob. If not, please follow the ROS installation instructions and install KnowRob from its Kinetic branch.

To install RoboSherlock with the parallel enhancement, please follow the RoboSherlock installation instructions.


  • Invoking the demo pipeline of RoboSherlock in parallel (remember to source your workspace before executing these commands, and advertise your point cloud over the topics /kinect_head/depth_registered/image_raw and /kinect_head/rgb/image_color):
roslaunch json_prolog json_prolog.launch initial_package:=robosherlock_knowrob&
roslaunch robosherlock rs.launch ae:=demo parallel:=true
  • Unit tests:
cd ~/<your_workspace>
catkin_make run_tests_robosherlock


The project successfully enhances RoboSherlock in the following ways:

  • Increases pipeline performance by 15-150%, depending on the computational requirements of the pipeline.
  • Scales up to large pipelines with many parallel annotators.
  • Provides better error handling in parallel execution through several failsafe mechanisms.

Future works

There are some improvements to consider for the parallel mechanism to work reliably:

  • Implement failsafes in all annotators for better error handling in parallel execution (so failures do not crash silently).
  • Scrub the annotator dependency chain of robosherlock_knowrob so that RSParallelPipelinePlanner plans the correct parallel pipeline structure.

Further enhancements:

  • Scale the parallel execution model to multiple machines in a network.
  • Dynamically reconfigure parameters for each annotator based on the parallel execution model.