Back of the napkin calculations for adding Internet to every yard of sidewalk in America

If you centralize data reporting for every yard of sidewalk in the US, here’s a back-of-the-napkin calculation:

  • Miles of paved roads in the US: 2,605,531 (Wikipedia)
  • Estimated miles of sidewalk in the US (roughly two sidewalk-miles per road mile): 5,210,662, or ~5 million
  • Estimated yards of sidewalk in the US (1,760 yards per mile): 9,170,765,120, aka ~9 billion
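
For reference, the yard count falls straight out of the sidewalk mileage (a minimal sketch in Python, using the estimated figures above):

```python
# Napkin estimate: yards of sidewalk from estimated sidewalk mileage
sidewalk_miles = 5_210_662          # ~2 sidewalk-miles per paved road mile (estimate above)
yards_per_mile = 1_760              # 1 mile = 1,760 yards
sidewalk_yards = sidewalk_miles * yards_per_mile
print(f"{sidewalk_yards:,} yards")  # 9,170,765,120 -> ~9 billion yards
```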

Assume one minimum-size IPv6 packet (84 bytes) sent from every yard to a single central processing destination, with no losses, retransmissions, or other congestion.

  • Once/day = 71 Mbps
  • Once/hour = 1.7 Gbps
  • Once/minute = 102.7 Gbps
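
That works out as follows (a rough sketch, assuming one 84-byte packet per yard per interval and decimal megabits):

```python
# Aggregate traffic to the central destination: one 84-byte packet per yard per interval
sidewalk_yards = 9_170_765_120
packet_bits = 84 * 8                 # minimum-size IPv6 packet assumed above

for label, seconds in [("once/day", 86_400), ("once/hour", 3_600), ("once/minute", 60)]:
    bps = sidewalk_yards * packet_bits / seconds
    print(f"{label}: {bps / 1e6:,.0f} Mbps")  # ~71 Mbps, ~1,712 Mbps, ~102,713 Mbps
```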

Of course, not all paved roads have sidewalks (and there are “sidewalks” without paved roads). Pushing in the other direction, you will have lost packets and will never get 100% network utilization.

Even 71 Mbps sustained is non-trivial. Since not all nodes will have direct access to the processing destination, whatever is created has to be routed upward to that point. Network topology and node capacity, in a mesh network for example, start to become a big issue.

Investing in the networking and monitoring/admin capabilities just to get “sidewalk” health is almost certainly unreasonable economically.

More importantly, what data are you really going to get from a minimal packet sent once/day? When you actually get into a real use case, it gets more complicated.

I worked on the estimates for a telephone pole monitoring network. The problem was that heartbeats once every 15 minutes were fine, but when events happened (hurricanes, tornadoes, etc.), not only did every node need to report ASAP, but the network topology and the power were also at risk. Big events like that would hammer the network (in our models), and if chunks of the net were down or out of power, it squeezed the bandwidth even further.

The scale gets out of hand quickly.
