{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Project 3: Wrangle OpenStreetMap Data\n",
"\n",
"--------"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Author: Susan Li\n",
"### Date: February 8 2017"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Map Area:\n",
"------------\n",
"### Boston Massachusetts, United States\n",
"\n",
"* https://www.openstreetmap.org/export#map=11/42.3126/-70.9978\n",
"\n",
"* https://mapzen.com/data/metro-extracts/metro/boston_massachusetts/\n",
"\n",
"This map covers the city of Boston, Massachusetts. I chose it because we love Boston and visit the city often, and I would like the opportunity to contribute to its improvement on openstreetmap.org. With the Boston Marathon only a couple of months away, we can't wait to go back."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Problems Encountered in the Map\n",
"---------\n",
"After downloading the dataset and running it against a provisional data.py file, I noticed the following main problems with the data.\n",
"\n",
"* Nonuniform and abbreviated street types ('St.', 'St', 'ST', 'Ave', 'Ave.', 'Pkwy', 'Ct', 'Sq', etc).\n",
"\n",
"* Inconsistent postal codes ('MA 02118', '02118-0239')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Abbreviated street names\n",
"This part of the project audits and cleans nonuniform and abbreviated street types. I attempt to replace the street types \"ST, St, St., st, street\" with \"Street\", \"Ave, Ave.\" with \"Avenue\", \"Pkwy\" with \"Parkway\", \"Rd, Rd., rd.\" with \"Road\", \"Ct\" with \"Court\", \"Dr\" with \"Drive\", \"Hwy\" with \"Highway\", and \"Sq\" with \"Square\". The following are the steps I take, as shown in audit.py. \n",
"\n",
"* Create a list of expected street types that do not need to be cleaned.\n",
"* The function \"audit_street_type\" collects the last word of each \"street_name\" string; if it is not in the expected list, it is stored in the dictionary \"street_types\". This lets me see the nonuniform and abbreviated street types being used and gives me a better sense of the data as a whole.\n",
"* The \"is_street_name\" function looks for tags that specify street names (k=\"addr:street\"). \n",
"* The \"audit\" function returns a dictionary of names matching the above conditions. \n",
"* The \"update_name\" function takes an old name and a mapping dictionary, and returns the updated name."
]
},
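{
"cell_type": "markdown",
"metadata": {},
"source": [
"The audit step can be sketched as follows. This is a minimal illustration of the approach described above, not the full audit.py; the regular expression and the \"expected\" list shown here are assumptions based on that description:\n",
"\n",
"```python\n",
"import re\n",
"from collections import defaultdict\n",
"\n",
"# Matches the last word of a street name, e.g. 'St' in 'Boston St'.\n",
"street_type_re = re.compile(r'\\b\\S+\\.?$', re.IGNORECASE)\n",
"\n",
"expected = ['Street', 'Avenue', 'Boulevard', 'Drive', 'Court', 'Place',\n",
"            'Square', 'Lane', 'Road', 'Parkway', 'Highway']\n",
"\n",
"def audit_street_type(street_types, street_name):\n",
"    # Collect street names whose last word is not an expected street type.\n",
"    m = street_type_re.search(street_name)\n",
"    if m and m.group() not in expected:\n",
"        street_types[m.group()].add(street_name)\n",
"\n",
"street_types = defaultdict(set)\n",
"audit_street_type(street_types, 'Boston St')\n",
"audit_street_type(street_types, 'Massachusetts Ave')\n",
"audit_street_type(street_types, 'Main Street')\n",
"# street_types now holds {'St': {'Boston St'}, 'Ave': {'Massachusetts Ave'}}\n",
"```"
]
},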
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def update_name(name, mapping):\n",
"    \"\"\"Replace an abbreviated street type in name using the mapping dictionary.\"\"\"\n",
"    m = street_type_re.search(name)\n",
"    if m and m.group() not in expected:\n",
"        if m.group() in mapping:\n",
"            name = re.sub(m.group(), mapping[m.group()], name)\n",
"\n",
"    return name"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This updates substrings in abbreviated address strings, so that \"Boston St\" becomes \"Boston Street\"."
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### Validate Postal Codes\n",
"* Boston-area postal codes begin with \"021\", \"022\" and \"024\". The formats of the postal codes vary: some are five digits, some are nine digits, and some start with the state abbreviation. \n",
"* Validating postal codes is a bit complicated, but I decided to drop leading and trailing characters before and after the main five-digit postal code. This drops the state abbreviation \"MA\" from \"MA 02118\" and the \"-0239\" suffix from \"02118-0239\", using the following function in audit_postcode.py."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import re\n",
"\n",
"def update_postcode(postcode):\n",
"    \"\"\"Clean postcode to a uniform five-digit format; return the updated postcode.\"\"\"\n",
"    if re.findall(r'^\\d{5}$', postcode):  # five digits: 02118\n",
"        valid_postcode = postcode\n",
"        return valid_postcode\n",
"    elif re.findall(r'(^\\d{5})-\\d{4}$', postcode):  # nine digits: 02118-0239\n",
"        valid_postcode = re.findall(r'(^\\d{5})-\\d{4}$', postcode)[0]\n",
"        return valid_postcode\n",
"    elif re.findall(r'MA\\s*\\d{5}', postcode):  # with state code: MA 02118\n",
"        valid_postcode = re.findall(r'\\d{5}', postcode)[0]\n",
"        return valid_postcode\n",
"    else:\n",
"        return None"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare for Database - SQL\n",
"The next step is to prepare the data to be inserted into a SQL database in data.py.\n",
"\n",
"* To do so I will parse the elements in the OSM XML file, transforming them from document format to tabular format, thus making it possible to write to .csv files. These csv files can then easily be imported to a SQL database as tables.\n",
"* The \"shape_element\" function is used to transform each element in the correct format, the function is passed each individual parent element from the \".osm\" file, one by one (when it is called in the \"process_map\" function)."
]
},
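{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of the tabular shaping (the real \"shape_element\" in data.py also produces the tag tables and validates against a schema, which are omitted in this sketch):\n",
"\n",
"```python\n",
"import xml.etree.ElementTree as ET\n",
"\n",
"NODE_FIELDS = ['id', 'lat', 'lon', 'user', 'uid', 'version', 'changeset', 'timestamp']\n",
"WAY_FIELDS = ['id', 'user', 'uid', 'version', 'changeset', 'timestamp']\n",
"\n",
"def shape_element(element):\n",
"    # Flatten one <node> or <way> element into dicts matching the table fields.\n",
"    if element.tag == 'node':\n",
"        return {'node': {f: element.attrib[f] for f in NODE_FIELDS}}\n",
"    elif element.tag == 'way':\n",
"        way = {f: element.attrib[f] for f in WAY_FIELDS}\n",
"        # Record the ordered node references that make up the way.\n",
"        way_nodes = [{'id': way['id'], 'node_id': nd.attrib['ref'], 'position': i}\n",
"                     for i, nd in enumerate(element.iter('nd'))]\n",
"        return {'way': way, 'way_nodes': way_nodes}\n",
"```"
]
},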
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Overview Statistics of The Dataset and Database Queries\n",
"This section contains basic statistics about the dataset, the SQL queries used to gather them, and some additional ideas about the data in context."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The size of each file"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The boston_massachusetts.osm file is 434.233159 MB\n",
"The project.db file is 135.95136 MB\n",
"The nodes.csv file is 46.200929 MB\n",
"The nodes_tags.csv file is 46.200929 MB\n",
"The ways.csv file is 22.793719 MB\n",
"The ways_tags.csv is 21.643234 MB\n",
"The ways_nodes.csv is 13.488694 MB\n"
]
}
],
"source": [
"import os\n",
"print('The boston_massachusetts.osm file is {} MB'.format(os.path.getsize('boston_massachusetts.osm')/1.0e6))\n",
"print('The project.db file is {} MB'.format(os.path.getsize('project.db')/1.0e6))\n",
"print('The nodes.csv file is {} MB'.format(os.path.getsize('nodes.csv')/1.0e6))\n",
"print('The nodes_tags.csv file is {} MB'.format(os.path.getsize('nodes_tags.csv')/1.0e6))\n",
"print('The ways.csv file is {} MB'.format(os.path.getsize('ways.csv')/1.0e6))\n",
"print('The ways_tags.csv is {} MB'.format(os.path.getsize('ways_tags.csv')/1.0e6))\n",
"print('The ways_nodes.csv is {} MB'.format(os.path.getsize('ways_nodes.csv')/1.0e6)) # Convert from bytes to MB"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create SQL Database and Tables"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import sqlite3\n",
"import csv\n",
"from pprint import pprint\n",
"\n",
"sqlite_file = \"project.db\"\n",
"conn = sqlite3.connect(sqlite_file)\n",
"cur = conn.cursor()\n",
"\n",
"cur.execute('DROP TABLE IF EXISTS nodes')\n",
"conn.commit()\n",
"\n",
"cur.execute('''\n",
"    CREATE TABLE nodes(id INTEGER, lat REAL, lon REAL, user TEXT, uid INTEGER, version TEXT, changeset INTEGER, timestamp TEXT)\n",
"''')\n",
"\n",
"conn.commit()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"with open('nodes.csv','r', encoding = 'utf-8') as fin:\n",
" dr = csv.DictReader(fin)\n",
" to_db = [(i['id'], i['lat'], i['lon'], i['user'], i['uid'], i['version'], i['changeset'], i['timestamp']) for i in dr]\n",
" cur.executemany(\"INSERT INTO nodes(id, lat, lon, user, uid, version, changeset, timestamp) VALUES (?,?,?,?,?,?,?,?);\", to_db)\n",
"\n",
"conn.commit()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Number of nodes"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(1048574,)]\n"
]
}
],
"source": [
"cur.execute(\"SELECT COUNT(*) FROM nodes;\")\n",
"print(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Number of ways"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(618026,)]\n"
]
}
],
"source": [
"cur.execute(\"SELECT COUNT(*) FROM ways;\")\n",
"print(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Number of unique users"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(922,)]\n"
]
}
],
"source": [
"cur.execute(\"SELECT COUNT(DISTINCT(e.uid)) FROM (SELECT uid FROM nodes UNION ALL SELECT uid FROM ways) e;\")\n",
"print(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Top 10 contributing users"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[('', 833300),\n",
" ('crschmidt', 402572),\n",
" ('jremillard-massgis', 231863),\n",
" ('MassGIS Import', 63279),\n",
" ('wambag', 30943),\n",
" ('ryebread', 20247),\n",
" ('OceanVortex', 17772),\n",
" ('Ahlzen', 5092),\n",
" ('ingalls_imports', 4220),\n",
" ('morganwahl', 3740)]\n"
]
}
],
"source": [
"import pprint\n",
"cur.execute(\"SELECT e.user, COUNT(*) as num FROM (SELECT user FROM nodes UNION ALL SELECT user FROM ways) e GROUP BY e.user ORDER BY num DESC LIMIT 10;\")\n",
"pprint.pprint(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Number of Starbucks"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(55,)]\n"
]
}
],
"source": [
"cur.execute(\"SELECT COUNT(*) FROM nodes_tags WHERE value LIKE '%Starbucks%';\")\n",
"print(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Count Tourism Related Categories Descending"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[('hotel', 87),\n",
" ('museum', 54),\n",
" ('artwork', 48),\n",
" ('attraction', 33),\n",
" ('viewpoint', 30),\n",
" ('picnic_site', 26),\n",
" ('information', 25),\n",
" ('guest_house', 8),\n",
" ('hostel', 4),\n",
" ('motel', 3),\n",
" ('aquarium', 2),\n",
" ('chalet', 2),\n",
" ('zoo', 2),\n",
" ('gallery', 1),\n",
" ('theme_park', 1)]\n"
]
}
],
"source": [
"import pprint\n",
"cur.execute (\"SELECT tags.value, COUNT(*) as count FROM (SELECT * FROM nodes_tags UNION ALL \\\n",
" SELECT * FROM ways_tags) tags \\\n",
" WHERE tags.key LIKE '%tourism'\\\n",
" GROUP BY tags.value \\\n",
" ORDER BY count DESC;\")\n",
"pprint.pprint(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Further Data Exploration"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Number of Restaurants in each city descending"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[('Cambridge', 51),\n",
" ('Boston', 44),\n",
" ('Somerville', 18),\n",
" ('Brookline', 10),\n",
" ('East Boston', 5),\n",
" ('Brookline, MA', 4),\n",
" ('Arlington', 3),\n",
" ('Medford', 3),\n",
" ('Watertown', 3),\n",
" ('Allston', 2),\n",
" ('Arlington. MA', 2),\n",
" ('Charlestown', 2),\n",
" ('Chestnut Hill', 2),\n",
" ('Jamaica Plain', 2),\n",
" ('Malden', 2),\n",
" ('2067 Massachusetts Avenue', 1),\n",
" ('Arlington, MA', 1),\n",
" ('Boston, MA', 1),\n",
" ('Cambridge, MA', 1),\n",
" ('Cambridge, Massachusetts', 1),\n",
" ('Chelsea', 1),\n",
" ('Everett', 1),\n",
" ('Quincy', 1),\n",
" ('Watertown, MA', 1),\n",
" ('boston', 1),\n",
" ('somerville', 1),\n",
" ('winthrop', 1)]\n"
]
}
],
"source": [
"import pprint\n",
"cur.execute(\"SELECT nodes_tags.value, COUNT(*) as num FROM nodes_tags JOIN (SELECT DISTINCT(id) \\\n",
" FROM nodes_tags WHERE value = 'restaurant') i ON nodes_tags.id = i.id WHERE nodes_tags.key = 'city'\\\n",
" GROUP BY nodes_tags.value ORDER BY num DESC;\")\n",
"\n",
"pprint.pprint(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Top 5 Most Popular Fast Food Chains"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(\"Dunkin' Donuts\", 12),\n",
" ('Subway', 11),\n",
" (\"McDonald's\", 8),\n",
" ('Burger King', 7),\n",
" (\"Wendy's\", 5)]\n"
]
}
],
"source": [
"cur.execute (\"SELECT nodes_tags.value, COUNT(*) as num FROM nodes_tags JOIN (SELECT DISTINCT(id) \\\n",
" FROM nodes_tags WHERE value='fast_food') i ON nodes_tags.id=i.id WHERE nodes_tags.key='name' \\\n",
" GROUP BY nodes_tags.value ORDER BY num DESC LIMIT 5;\")\n",
"\n",
"pprint.pprint(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Top 5 Cafe Chains"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(\"Dunkin' Donuts\", 41),\n",
" ('Starbucks', 41),\n",
" ('Au Bon Pain', 7),\n",
" (\"Peet's Coffee\", 5),\n",
" ('Starbucks Coffee', 5)]\n"
]
}
],
"source": [
"cur.execute (\"SELECT nodes_tags.value, COUNT(*) as num FROM nodes_tags JOIN (SELECT DISTINCT(id) \\\n",
" FROM nodes_tags WHERE value='cafe') i ON nodes_tags.id=i.id WHERE nodes_tags.key='name' \\\n",
" GROUP BY nodes_tags.value ORDER BY num DESC LIMIT 5;\")\n",
"\n",
"pprint.pprint(cur.fetchall())"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## Additional Ideas\n",
"\n",
"After the above review and analysis, it is obvious that the Boston-area data is neither complete nor accurate, though I believe the street names and postal codes have been cleaned for the purpose of this exercise. For example, in the queries above, \"Starbucks\" appears twice in the same list, and \"Dunkin' Donuts\" is counted as both a fast food restaurant and a cafe. More standardized data entry should be implemented.\n",
"\n",
"In addition, I noticed that a significant amount of street address and postcode data was missing when I was updating street names and postcodes at the start of the project. Validating data is an important part of OpenStreetMap: experienced volunteers check OSM data to make sure it is complete, accurate and thorough using [Quality Assurance tools](http://wiki.openstreetmap.org/wiki/Quality_assurance).\n",
"\n",
"Also from the review above, there are only 87 hotels in Boston, while a quick search on [booking.com](http://www.booking.com/city/us/boston.html) gives a result of 304 properties. I think OpenStreetMap should encourage hotels, restaurants and other local businesses to publish on OSM and edit their own content, making it more accurate and available for everyone to benefit from."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Benefits and Anticipated Problems in Implementing the Improvement"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Benefits\n",
"\n",
"* Local people search maps for amenities, and tourists search maps for interesting attractions and popular restaurants. There is potential to extend OpenStreetMap to include travel information, walking and biking tours, user reviews of establishments, local public transit, and so on. The more people use it, the more people add data, which makes the maps better for everyone. This could expand OpenStreetMap's user base and make it more consumer focused.\n",
"\n",
"* Parks, schools, hotels, restaurants and other local businesses should be able to update and edit their own data on OSM, to ensure its accuracy and completeness.\n",
"\n",
"* If more businesses tap OSM for their own services and edit content to make it more detailed, precise and accurate, this will also make the OSM community more diverse."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Anticipated problems\n",
"\n",
"* As we are all aware, OpenStreetMap has no paid employees and no premises. The missing-postcode problem lies in not being able to convert a postcode to a latitude/longitude figure and back again; the data to easily enable this isn't available in many countries. It is also easier to figure out street names than postcodes by walking around.\n",
"\n",
"* From an end user's perspective, how many people have ever used OpenStreetMap, or even heard of it? Can anyone rely on OSM as much as on a slick Google app? If there is not enough awareness, how can we expect local businesses to pay attention to it?\n",
"\n",
"* Because there is no cost to use the data, when a company taps OSM for its own service, such as a mobile app, it is hard for OSM to get the attribution it deserves. According to OSM Foundation chairman Simon Poole, [lack of attribution is a major issue](http://www.openstreetmap.org/user/SimonPoole/diary/20719)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notebook created by: [Susan Li](https://www.linkedin.com/in/susanli/)\n",
"\n",
"Copyright © 2016 [Analytics Toronto](https://analyticstoronto.com/)"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}