@benmarwick
benmarwick / gdoc-revisions-analysis.R
Last active October 31, 2022 18:51
How to get access to specific revisions (and details about them) of a Google Drive document? cf. https://github.com/tidyverse/googledrive/issues/218
# related to my question here: https://github.com/tidyverse/googledrive/issues/218
# Can we get access to specific revisions of a Google Drive document? Yes, we can.
# Can we get the username for each revision, and the size of the document at each
# revision (in bytes or words, assuming we are talking about Docs rather than Sheets
# or other file types)? Yes, we can.
# The goal is to measure each author's contribution to a collaboratively-authored
# Google Doc with as little work as possible.

# Getting the revisions for a Google Doc
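# A minimal sketch, not part of the original gist: list a Doc's revisions via the
# Drive v3 REST API (files/{fileId}/revisions), reusing googledrive's OAuth token.
# The file id is a placeholder, and the size field may only be populated for some
# file types.
library(googledrive)
library(httr)

file_id <- "YOUR-DOC-ID"  # placeholder: the Doc's Drive file id

resp <- GET(
  paste0("https://www.googleapis.com/drive/v3/files/", file_id, "/revisions"),
  query = list(
    fields = "revisions(id,modifiedTime,lastModifyingUser/displayName,size)"
  ),
  drive_token()  # googledrive's configured token, spliced into the request
)
stop_for_status(resp)
revs <- content(resp)$revisions
# revs is a list with one element per revision: its id, its timestamp, the
# display name of the user who last modified it, and (sometimes) a size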
@csswizardry
csswizardry / README.md
Last active April 2, 2024 20:17
Vim without NERD tree or CtrlP

I used NERD tree for quite a while, then switched to CtrlP for something a little more lightweight. My setup now includes zero file browser or tree view, and instead uses native Vim fuzzy search and auto-directory switching.

Fuzzy Search

There is a super sweet feature in Vim whereby you can fuzzy find your files using **/*, e.g.:

:vs **/*<partial file name><Tab>
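
Tab there cycles through the matches. As an aside that is not from the original gist, this kind of command-line completion is much nicer with wildmenu turned on; a minimal .vimrc sketch:

set wildmenu                      " show completion matches above the command line
set wildmode=longest:full,full    " complete the longest common match first, then cycle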
@joewiz
joewiz / post-mortem.md
Last active September 3, 2023 11:57
Recovery from nginx "Too many open files" error on Amazon AWS Linux

On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:

2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)

2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...

An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.

  1. * Instead of using su to run ulimit on the nginx account, use ps aux | grep nginx to locate nginx's process IDs, then query each process's file handle limits with cat /proc/pid/limits (where pid is a process ID from ps; depending on your system, sudo may be needed for the cat command). A shell sketch of these checks follows the list.
  2. Add fs.file-max = 70000 to /etc/sysctl.conf
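
A minimal shell sketch of steps 1 and 2; the pid below is taken from the error log excerpt above, and yours will differ:

ps aux | grep nginx                                          # locate nginx's process IDs
cat /proc/2475/limits                                        # inspect that process's open-file limits
echo 'fs.file-max = 70000' | sudo tee -a /etc/sysctl.conf    # raise the system-wide file handle cap
sudo sysctl -p                                               # reload sysctl so the new limit applies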