I hereby claim:
- I am francisbarton on github.
- I am fbarton (https://keybase.io/fbarton) on keybase.
- I have a public key whose fingerprint is E4E9 8FEF C41D 27CB 0C85 8C69 6DC4 22A9 890E 62A8
To claim this, I am signing this object:
<?xml version="1.0" encoding="utf-8"?>
<style xmlns="http://purl.org/net/xbiblio/csl" class="in-text" version="1.0" demote-non-dropping-particle="sort-only" default-locale="en-GB">
<info>
<title>Harvard - University of Gloucestershire</title>
<id>http://www.zotero.org/styles/harvard-university-of-gloucestershire</id>
<link href="http://www.zotero.org/styles/harvard-university-of-gloucestershire" rel="self"/>
<link href="http://www.zotero.org/styles/harvard-sheffield" rel="template"/>
<link href="http://ist.glos.ac.uk/referencing/harvard/" rel="documentation"/>
<author>
<name>Francis Barton</name>
I thought this would be quite a quick job.
The easiest thing to do would have been to work in Excel with manually downloaded data from websites. But I wanted to construct a script to retrieve, process and combine the data [programmatically][] and [reproducibly][], and preferably [DRY][]-ly as well.
# load packages -----------------------------------------------------------
library(rlang) | |
library(dplyr) | |
library(tidyr) | |
library(magrittr) | |
library(purrr) | |
library(nomisr) | |
I asked [a question on Stack Overflow][soq] about a super-annoying problem I was experiencing.
I created a [reprex][repr] for it and posted it [as a gist here][gist1], but in the end I did not need to point to the whole reprex: the slightly edited, shorter reprex I posted in the SO question was sufficient.
Within a matter of minutes the question had received a very accurate and helpful reply from [Eugene Chong][ec_up].
[soq]: https://stackoverflow.com/questions/60155799/how-can-i-use-map-and-mutate-to-convert-a-list-into-a-set-of-additional-columns
[repr]: https://reprex.tidyverse.org/articles/reprex-dos-and-donts.html
[gist1]: https://gist.github.com/francisbarton/3c9f755a7f17ce5624edb9d4da0f4f59
[ec_up]: https://www.design.upenn.edu/city-regional-planning/graduate/work/developing-new-metrics-transportation-safety-cyclist-and
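The problem in that question's title (converting a list into a set of additional columns) is also the kind of thing `tidyr::unnest_wider()` handles directly. A minimal sketch on invented toy data, not taken from the original post:

```r
library(dplyr, warn.conflicts = FALSE)
library(tidyr)

# invented toy data: a tibble with a list-column of named lists
df <- tibble(
  id = 1:2,
  info = list(list(a = 1, b = 2), list(a = 3, b = 4))
)

# unnest_wider() spreads each named list out into its own columns
wide <- df %>% unnest_wider(info)
```

Each name inside the list-column becomes a new column (`a` and `b` here), keeping one row per original row.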
<!-- language-all: lang-r -->
library(nominatim)
#> Data (c) OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright
#> Nominatim Usage Policy: http://wiki.openstreetmap.org/wiki/Nominatim_usage_policy
#> MapQuest Nominatim Terms of Use: http://info.mapquest.com/terms-of-use/
library(sf)
#> Linking to GEOS 3.6.1, GDAL 2.2.3, PROJ 4.9.3
library(raster)
In response to a [question from Stewart Lee][disqus-sl] (presumably not that one)
> I have a numeric column and want to filter all numbers ending with (.999). Tried a couple of tricks and failed. Any suggestion?
suppressPackageStartupMessages(library(dplyr))
options(pillar.sigfig = 6)
# a misc. list of numbers just to test out my code against different targets
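One way to approach it (a sketch of my own, not necessarily the final answer I gave): take the fractional part with `%% 1` and compare it to 0.999 using `dplyr::near()`, which avoids exact floating-point equality tests:

```r
library(dplyr, warn.conflicts = FALSE)

# a misc. list of numbers just to test the code against different targets
nums <- tibble(x = c(0.999, 1.5, 34.999, 7, 2.9991))

# keep rows whose fractional part is (within tolerance) 0.999
ending_999 <- nums %>% filter(near(x %% 1, 0.999))
```

`near()` uses a small default tolerance, so 2.9991 is excluded here; to match on the printed form instead, you could test the text with `stringr::str_detect(as.character(x), "\\.999$")`.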
---
title: "setup test"
output: html_document
---
```{r setup}
library(dplyr)
print("Hello")
```
``` r
library(dplyr, warn.conflicts = FALSE)
library(purrr)
library(rlang, warn.conflicts = FALSE)
library(stringr)
filenames <- c("coronavirus_cases_202007061134.csv", "coronavirus_cases_202007071134.csv", "coronavirus_cases_202007081134.csv")
cases <- c(1000, 1500, 2000) | |
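# Hypothetical addition (not in the original gist): pull the 12-digit
# timestamp out of each filename with stringr and parse its date part
timestamps <- str_extract(filenames, "[0-9]{12}")
file_dates <- as.Date(timestamps, format = "%Y%m%d%H%M")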
# couple of functions
# Use the doogal.co.uk API to get data about a postcode
# Doesn't accept a vector of codes all at once, so use with purrr::map_dfr()
# along a vector to combine results into a data frame
get_doogal_data <- function(postcode) { | |
data_names <- c( | |
"postcode", | |
"latitude", | |
"longitude", |