#!/usr/bin/env python
""" Speed test of LP phase processing in Py-ART. """
import pyart
import numpy as np
from pyart.testing.sample_objects import make_empty_ppi_radar, \
    _EXAMPLE_RAYS_FILE


def pproc(LP_solver, proc=1):
    """ Phase process using LP_solver and number of processors. """
    # make an example radar to phase process
    radar = make_empty_ppi_radar(983, 80, 1)
    radar.range['data'] = 117.8784 + np.arange(983) * 119.91698
    f = np.load(_EXAMPLE_RAYS_FILE)
    for field_name in f:
        fdata = f[field_name]
        fdata = np.tile(fdata, (80, 1))
        radar.fields[field_name] = {'data': fdata}
    # phase processing
    phidp, kdp = pyart.correct.phase_proc_lp(radar, 0.0, LP_solver=LP_solver,
                                             proc=proc)


if __name__ == '__main__':
    import pstats, cProfile
    # profile a single run and print the most expensive calls
    cProfile.runctx("pproc('cvxopt')", globals(), locals(),
                    "lp_speed_test.prof")
    s = pstats.Stats("lp_speed_test.prof")
    s.strip_dirs().sort_stats("time").print_stats()
jjhelmus commented Nov 22, 2013

Results from timing using IPython's %timeit command

%timeit lp_speed_test.pproc(args) 
args: timing               

'pyglpk': 33.4 s per loop
'cvxopt': 34.3 s per loop
'cylp': 570 ms per loop
'cylp_mp', 1: 605 ms per loop

The speed does not improve moving to more processors as the problem is too small.
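For a problem this small the per-process overhead swamps any gain from extra workers. A minimal sketch of the chunk-and-pool pattern behind this kind of multiprocess solve (illustrative only, not the actual cylp_mp implementation; `solve_rays` is a hypothetical stand-in for the per-ray LP solves):

```python
from multiprocessing import Pool


def solve_rays(chunk):
    # stand-in for solving one chunk of independent per-ray LPs;
    # the real work would be one LP solve per ray
    return [sum(i * i for i in range(n)) for n in chunk]


def pproc_mp(nrays, proc=1, chunksize=25):
    """Split nrays independent problems across proc worker processes."""
    rays = list(range(nrays))
    chunks = [rays[i:i + chunksize] for i in range(0, nrays, chunksize)]
    with Pool(processes=proc) as pool:
        # pool.map preserves chunk order, so results come back in ray order
        results = pool.map(solve_rays, chunks)
    # flatten per-chunk results back into a single per-ray list
    return [r for chunk in results for r in chunk]


if __name__ == '__main__':
    out = pproc_mp(100, proc=2)
```

Each worker pays a fixed cost to spawn and to ship its chunk through a pipe, which is why tiny problems see no benefit.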

Increasing the number of rays to 8000 (and commenting out the print statement in the cylp_mp processing):

args: timing               

'cylp': 23.3 s per loop
'cylp_mp', 1: 23.3 s per loop
'cylp_mp', 2: 19.3 s per loop
'cylp_mp', 4: 17.3 s per loop
'cylp_mp', 8: 17.4 s per loop
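The diminishing returns past 2 processes are what Amdahl's law predicts when only a fraction of the runtime parallelizes. A quick illustration (the 0.35 parallel fraction below is an arbitrary illustrative value, not fitted to these measurements):

```python
def amdahl_speedup(p, n):
    """Ideal overall speedup when a fraction p of the runtime
    is spread over n processes and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)


# with a parallel fraction of 0.35, extra processes quickly
# stop helping, much like the cylp_mp timings above
for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(0.35, n), 2))
```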


kmuehlbauer commented Nov 23, 2013

Please also test from a plain command line (a shell), with no IPython or IDLE involved, because this improves things dramatically. At least IDLE apparently tried to handle the separate processes through another pipe or something, which added a lot of overhead. I'm very interested in the outcome, because the speedup here was significant.
