This list was made using ONSAD and AddressBase. Here's a rough guide to how it was made:
- Get ONSAD
- Get a CSV file with `UPRN,Postcode` columns; I got this from AddressBase
- Install csvkit
- Remove all the columns in ONSAD that we don't care about (we only want the UPRN and LAD columns):
`cat ../ONSAD_JAN_2017/Data/*.csv | csvcut -c 1,3 > uprn_lad.csv`
We now have two CSV files, each with two columns: one with `UPRN,postcode`, the other with `UPRN,LAD`.
- Run `join.py` to join the two files into a single file with 3 columns
- Make a file with `postcode,lad`:
`cat joined.csv | csvcut -c 3,2 > postcode_lad.csv`
- Filter out duplicate rows:
`sort -u postcode_lad.csv > unique_rows.csv`
We now have a list of each unique (postcode, lad) pair. We need to find the postcodes that appear more than once, as this means a postcode is split across local authorities:
`csvcut -c 1 unique_rows.csv | uniq -c | sort -rn | grep -v "^ *1 "`
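The `join.py` script referred to above isn't shown. Here's a minimal sketch of what it might do, assuming an in-memory dict join on UPRN (the function name and the inline sample data are mine, not the original script's):

```python
import csv
import io

def join_on_uprn(uprn_postcode_rows, uprn_lad_rows):
    """Join two (UPRN, value) tables on UPRN, yielding UPRN,postcode,LAD rows."""
    lad_by_uprn = dict(uprn_lad_rows)   # UPRN -> LAD lookup
    for uprn, postcode in uprn_postcode_rows:
        if uprn in lad_by_uprn:         # skip UPRNs with no LAD entry
            yield [uprn, postcode, lad_by_uprn[uprn]]

# Tiny inline sample standing in for the real UPRN,postcode and UPRN,LAD files.
uprn_postcode = csv.reader(io.StringIO("1,AB1 2CD\n2,EF3 4GH\n"))
uprn_lad = csv.reader(io.StringIO("1,E07000001\n2,E07000002\n"))

for row in join_on_uprn(uprn_postcode, uprn_lad):
    print(",".join(row))
```

In practice you'd read the two real CSV files and write `joined.csv` with `csv.writer`; holding the smaller table in a dict keeps the join to a single pass over the larger file.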
Because I used AddressBase, and that's a closed database (although it clearly should be open), you might run into licensing problems if you were to actually use this in the real world.
I wholeheartedly encourage you to use this in the real world!
I'm assigning a "Bring it" licence to this work. The "Bring it" licence is the same as CC0, with the addition that you're required to publish any legal takedowns you receive about using it and to notify me of their publication URL so I can collect them here.