In the analysis section of the standard PDAL workshop, the thinning and ground-detection exercises use chained PDAL commands to execute tasks.
Pipelines that execute the same tasks are given below. We'll figure out how to add them to the standard materials before the next PDAL workshop is delivered!
The Poisson dart-throwing filter (filters.sample) is described here:
The command line:
pdal translate \
./exercises/analysis/density/uncompahgre.laz \
./exercises/analysis/thinning/uncompahgre-thin.laz \
sample \
--filters.sample.radius=20
...can be achieved with the pipeline:
{
  "pipeline": [
    "file-input.las",
    {
      "type": "filters.sample",
      "radius": 20
    },
    {
      "type": "writers.las",
      "filename": "file-sampled.las"
    }
  ]
}
...saved as 'poissonsample.json' and invoked with:
pdal pipeline poissonsample.json \
--readers.las.filename="./exercises/analysis/density/uncompahgre.laz" \
--writers.las.filename="./exercises/analysis/thinning/uncompahgre-thin.laz"
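For intuition, the dart-throwing idea behind filters.sample can be sketched in a few lines: a point is kept only if no already-kept point lies within the sampling radius. This is a toy 2D brute-force version for illustration, not PDAL's actual implementation:

```python
import random

def dart_throw_sample(points, radius):
    """Toy Poisson 'dart throwing': keep a point only if no
    already-kept point lies within `radius` of it (brute force)."""
    kept = []
    r2 = radius * radius
    for x, y in points:
        if all((x - kx) ** 2 + (y - ky) ** 2 >= r2 for kx, ky in kept):
            kept.append((x, y))
    return kept

random.seed(0)
cloud = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(500)]
thinned = dart_throw_sample(cloud, radius=20)
print(len(cloud), "->", len(thinned))
```

Every pair of surviving points is guaranteed to be at least one radius apart, which is why the result looks evenly spread rather than gridded.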
Alternately, if you replace file-input.las and file-sampled.las with real paths in the pipeline JSON, just run:
pdal pipeline poissonsample.json
You can also try changing filter parameters at run time:
pdal pipeline poissonsample.json --filters.sample.radius=50
...has the same effect as changing your pipeline JSON block to:
{
  "type": "filters.sample",
  "radius": 50
},
The voxelgrid filter is described here:
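The voxel-grid idea can be sketched as binning points into cubes and keeping one representative point per occupied voxel. This toy version keeps the first point seen in each voxel; it's an assumption-laden sketch of the concept, not PDAL's exact behaviour:

```python
def voxel_thin(points, cell):
    """Toy voxel-grid thinning: bucket each (x, y, z) point into a
    cell x cell x cell voxel and keep the first point per voxel."""
    seen = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        seen.setdefault(key, p)  # first point in the voxel wins
    return list(seen.values())

pts = [(0.1, 0.2, 0.0), (0.4, 0.4, 0.3), (5.0, 5.0, 5.0)]
thinned = voxel_thin(pts, cell=1.0)
print(thinned)
```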
The decimation filter is described here:
Try first randomising (or sorting) and then decimating; then reverse the filter order and see how it affects your result.
A simple decimation filter which picks every 20th point in an input file looks like:
{
  "pipeline": [
    "file-input.las",
    {
      "type": "filters.decimation",
      "step": 20
    },
    {
      "type": "writers.las",
      "filename": "file-sampled.las"
    }
  ]
}
...but it might result in a pretty ugly point cloud. Adding a filters.randomize
step helps to spread the results out. This next pipeline first randomises the order of points in the input data, then picks every 20th point:
{
  "pipeline": [
    "file-input.las",
    {
      "type": "filters.randomize"
    },
    {
      "type": "filters.decimation",
      "step": 20
    },
    {
      "type": "writers.las",
      "filename": "file-sampled.las"
    }
  ]
}
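The difference between the two pipelines is easy to see on a toy ordered "cloud" of integers: plain decimation takes a regular stride, while shuffling first spreads the picks around. The functions here are Python stand-ins for filters.decimation and filters.randomize, sketch only:

```python
import random

def decimate(points, step):
    """Keep every `step`-th point, like filters.decimation."""
    return points[::step]

random.seed(1)
cloud = list(range(100))          # stand-in for an ordered point cloud

plain = decimate(cloud, 20)       # strided pick: always 0, 20, 40, ...
shuffled = cloud[:]
random.shuffle(shuffled)          # filters.randomize analogue
spread = decimate(shuffled, 20)   # same count, different members each run

print(plain)
print(sorted(spread))
```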
Finally, try using filters.mortonorder
(described here: ) in place of filters.randomize
and see how you go.
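Morton (Z-order) sorting interleaves the bits of point coordinates so that nearby points end up adjacent in the ordering; decimating a Morton-ordered cloud therefore keeps a spatially even spread. A minimal 2D sketch of the idea (integer coordinates, not PDAL's implementation):

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of integer x and y to get a Morton
    (Z-order) code; sorting by it groups spatially nearby points."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bits -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bits -> odd positions
    return key

pts = [(0, 0), (7, 7), (1, 0), (0, 1), (6, 7)]
ordered = sorted(pts, key=lambda p: morton_key(*p))
print(ordered)
```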
Finding ground points requires a couple of operations: first reducing noise, then finding ground.
Note that you'll need to tune the parameters of your noise and ground-finding filters for your own data; this pipeline shows the basic steps:
{
  "pipeline": [
    "file-input.las",
    {
      "type": "filters.outlier",
      "mean_k": 8,
      "multiplier": 3
    },
    {
      "type": "filters.smrf",
      "ignore": "Classification[7:7]"
    },
    {
      "type": "filters.range",
      "limits": "Classification[2:2]"
    },
    {
      "type": "writers.las",
      "filename": "file-sampled.las"
    }
  ]
}
Dropping the final filters.range
block will return all the points, with 'ground' points assigned class 2 and noise points assigned class 7 (the ASPRS LAS classification codes for ground and low point/noise respectively).
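The statistical test behind filters.outlier can be sketched in 1D: compute each point's mean distance to its mean_k nearest neighbours, then flag points whose mean distance exceeds the global mean plus multiplier standard deviations. A toy brute-force version, not PDAL's implementation:

```python
import statistics

def classify_outliers(points, mean_k, multiplier):
    """Flag 1D points whose mean distance to their mean_k nearest
    neighbours exceeds global_mean + multiplier * global_stddev."""
    mean_dists = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        mean_dists.append(sum(dists[:mean_k]) / mean_k)
    mu = statistics.mean(mean_dists)
    sigma = statistics.pstdev(mean_dists)
    return [d > mu + multiplier * sigma for d in mean_dists]

pts = [i * 0.1 for i in range(20)] + [50.0]   # last point is isolated
flags = classify_outliers(pts, mean_k=3, multiplier=3)
print(flags)
```

Only the isolated point is flagged; in the pipeline above those flagged points get class 7, which filters.smrf is then told to ignore.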
In the workshop, cropping points with a polygon is a complex exercise. It uses filters.overlay
to assign a class to points which sit inside a polygon, then filters the dataset to return only those points. This can be handy if you have lots of polygons in, say, a shapefile or geopackage to clip points from.
The downside is that all the points you clip are assigned a single class, or have some other dimension set to a single value corresponding to the ID of the polygon used to clip the points.
If we want to operate more directly, we can use filters.crop.
This is less flexible, and currently slightly painful since we need to convert polygons to WKT (Well Known Text), but you can keep all your dimensions intact.
Taking the workshop example, we'll convert the polygon around the stadium to WKT. In the attributes.json
file this is the fourth line:
{ "type": "Feature", "properties": { "id": 2, "cls": 6 }, "geometry": { "type": "Polygon", "coordinates": ... } },
...which, as WKT, looks like:
"POLYGON ((-123.068875586721 44.0592277937513,...,-123.068875586721 44.0592277937513))"
...and construct a pipeline (this includes the full WKT polygon). Since the WKT polygon is not in the same CRS as the point cloud, we need to tell PDAL: we set the polygon CRS using a_srs
, and PDAL infers a transformation from the polygon to the point cloud CRS.
{
  "pipeline": [
    "file-input.las",
    {
      "type": "filters.crop",
      "polygon": "POLYGON ((-123.068875586721 44.0592277937513,-123.068425255932 44.0593017731328,-123.067936238336 44.0592489909417,-123.067495909125 44.059043544671,-123.067101851709 44.0586296501919,-123.067099413306 44.0580633355964,-123.067333190515 44.0575874173305,-123.067852567837 44.0573204352561,-123.068290450314 44.0572148305385,-123.068899280712 44.0572250214092,-123.069338909397 44.0574145238262,-123.069655618141 44.0578301571477,-123.069799646794 44.0580901477556,-123.069828689995 44.0585042983316,-123.069657775977 44.0588990323626,-123.068875586721 44.0592277937513))",
      "a_srs": "EPSG:4326"
    },
    {
      "type": "writers.las",
      "filename": "file-cropped.las"
    }
  ]
}
...which can be run like:
pdal pipeline cropfilterdemo.json --readers.las.filename="/path/to/autzen.laz" --writers.las.filename="thestadium.las"
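If your polygons start life as GeoJSON (like attributes.json), converting an exterior ring to WKT is a small transformation. This sketch uses a hypothetical three-vertex feature in the same shape as the workshop file, abbreviated for illustration:

```python
import json

# Hypothetical single feature, same shape as a line of attributes.json
feature = json.loads("""
{ "type": "Feature", "properties": { "id": 2, "cls": 6 },
  "geometry": { "type": "Polygon",
    "coordinates": [[[ -123.068875586721, 44.0592277937513 ],
                     [ -123.068425255932, 44.0593017731328 ],
                     [ -123.068875586721, 44.0592277937513 ]]] } }
""")

def polygon_to_wkt(geometry):
    """Convert a GeoJSON Polygon (exterior ring only) to WKT:
    'x y' pairs, comma-separated, first vertex repeated last."""
    ring = geometry["coordinates"][0]
    coords = ", ".join(f"{x} {y}" for x, y in ring)
    return f"POLYGON (({coords}))"

wkt = polygon_to_wkt(feature["geometry"])
print(wkt)
```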
At the time I wrote this, using WKT multipolygons to clip multiple geometries at once did not work; you'll need to iterate through your polygon set and clip one at a time.
Note that we can also pass geometries in the pipeline call using --filters.crop.polygon
and adjust the CRS using --filters.crop.a_srs.
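One way to iterate, sketched in Python: write one pipeline JSON per polygon and then run pdal pipeline on each generated file. The polygons and filenames here are made up for illustration:

```python
import json

# Hypothetical WKT polygons to clip with, one at a time
polygons = ["POLYGON ((0 0, 1 0, 1 1, 0 0))",
            "POLYGON ((2 2, 3 2, 3 3, 2 2))"]

jobs = []
for i, wkt in enumerate(polygons):
    pipeline = {"pipeline": [
        "file-input.las",
        {"type": "filters.crop", "polygon": wkt, "a_srs": "EPSG:4326"},
        {"type": "writers.las", "filename": f"clip-{i}.las"},
    ]}
    name = f"crop-{i}.json"
    with open(name, "w") as f:
        json.dump(pipeline, f, indent=2)  # one pipeline file per polygon
    jobs.append(name)
    # then run each: pdal pipeline crop-<i>.json

print(jobs)
```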
Have fun!