Some notes and thoughts from our QGIS workshop
- Tony made a QGIS project for Doncaster's data
- The QGIS project file is XML, which looks very easy for us to template / produce
- We could make a QGIS project file for each LPA containing the data sources, alongside our processed data (filtered for the LPA)
- We can organise these sources in the project in different ways, including in a hierarchy
- Tony showed how our tile server can be used as an API that exposes the features
- We should consider documenting our support for people using this as an API
- The tile server provides our properties against each shape
- but QGIS splits features that cross tile boundaries
- Tony prefers WFS to the vector tile service; Paul is wary of its large attack surface
- Maybe we could spike a very minimal, constrained implementation of the WFS API? (we could use the QGIS console to trace the calls it makes)
- CSV data is slow to load, but can be indexed
- The project file can include styles (colour of outlines and fills, hatching, and transparency) which we could match to our national map
- We could consider a design language for our colours (hard if we want to group by theme while still contrasting the same data from different sources)
- Paul pointed at https://www.placemark.io/features/creating-maps-from-spreadsheets for making maps from spreadsheets
- Tony showed other possibilities, such as filtering and colouring
- We looked at overlap analysis by building an index. Tony splits layers by the organisation-entity
- QGIS users can do a lot of this work themselves, including ETL and transforming the data
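The minimal WFS spike mentioned above could start as small as a single request dispatcher. The sketch below is an assumption about what "minimal and constrained" might mean, not real WFS conformance: only GetCapabilities and GetFeature are handled, GetFeature always answers in GeoJSON, and the layer name and feature data are made up for illustration.

```python
# Sketch of a minimal, constrained WFS-style request handler.
# NOT the full OGC WFS spec: only GetCapabilities and GetFeature are
# supported, and responses are plain dicts (GeoJSON-shaped for
# GetFeature). Layer contents are hard-coded placeholders; a real
# spike would serve our processed, LPA-filtered data.

# Hypothetical layer data, keyed by layer name.
FAKE_LAYERS = {
    "conservation-area": [
        {"type": "Feature",
         "properties": {"reference": "CA-001",
                        "organisation": "local-authority:DNC"},
         "geometry": {"type": "Point", "coordinates": [-1.13, 53.52]}},
    ],
}

def handle_wfs(params):
    """Dispatch on the 'request' query parameter, returning (status, body)."""
    request = params.get("request", "").lower()
    if request == "getcapabilities":
        # Real WFS returns XML capabilities; a stub is enough for tracing
        # the calls QGIS makes during the spike.
        return 200, {"service": "WFS", "layers": sorted(FAKE_LAYERS)}
    if request == "getfeature":
        layer = params.get("typename")
        if layer not in FAKE_LAYERS:
            return 404, {"error": f"unknown layer {layer!r}"}
        return 200, {"type": "FeatureCollection",
                     "features": FAKE_LAYERS[layer]}
    # Everything else (Transaction, DescribeFeatureType, ...) is refused,
    # which is what keeps the attack surface small.
    return 400, {"error": f"unsupported request {request!r}"}
```

Wiring this into `http.server` or a small web framework would then show whether QGIS is satisfied with such a constrained subset.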
We could spike making a project file, and research the audience for QGIS, which could include analysts as well as people in LPAs, and people providing training such as Alasdair Rae.
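Since the project file is XML, the project-file spike could begin with plain string templating. The skeleton below is drastically simplified and illustrative only: real .qgs files carry far more state, so the likelier route is to save a working project from QGIS and substitute the LPA-specific values into that. The layer name and data source URL are made-up placeholders.

```python
# Sketch of producing a per-LPA QGIS project file by templating XML.
# The .qgs skeleton here is a simplified assumption, not the full
# schema; in practice we would parameterise a project saved from QGIS.
from xml.sax.saxutils import escape

PROJECT_TEMPLATE = """<!DOCTYPE qgis>
<qgis projectname="{name}" version="3.28">
  <projectlayers>
    {layers}
  </projectlayers>
</qgis>
"""

LAYER_TEMPLATE = """<maplayer type="vector">
      <layername>{layername}</layername>
      <datasource>{source}</datasource>
    </maplayer>"""

def render_project(lpa_name, layers):
    """layers: list of (layer name, data source) pairs for this LPA."""
    rendered = "\n    ".join(
        LAYER_TEMPLATE.format(layername=escape(n), source=escape(s))
        for n, s in layers
    )
    return PROJECT_TEMPLATE.format(name=escape(lpa_name), layers=rendered)

# Hypothetical data source for Doncaster:
print(render_project("Doncaster", [
    ("Conservation areas",
     "https://example.org/doncaster/conservation-area.geojson"),
]))
```

Generating one such file per LPA, next to the LPA-filtered data, would be a cheap way to test the idea with a friendly user like Tony.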
I'm using overlap analysis to select a set of features in a local authority (e.g., conservation areas). I can then split that based on organisation-entity (or name/reference patterns) to identify candidates for deduplication and targets.
I save these as two separate layers, import them into Postgres, and use SQL queries to create the candidate -> target mappings for the dedupe processing.