TRB Feedback

The Assembly and Interpretation of Transit Data

A freeway, Tilton warned, "is a device which can make or break the city. It can liberate or contribute to congestion. It can cut the city into unrelated parts, or bind it together. It can destroy values, or create new ones. The State cannot soundly develop its urban freeway plans without attention to the planning problems of the city itself." Tilton criticized the state for a narrow approach that considered merely "the assembly and interpretation of traffic data... Failure to provide for transit service on the Freeway will result in an unmanageable deluge of private automobiles in the already congested areas of the city."

William Issel, quoting L. Deming Tilton in 1945, The Pacific Historical Review, Vol. 68, No. 4 (Nov., 1999), pp. 611-646

These notes and thoughts are feedback to Eric Fischer's paper for the Transportation Research Board [link to draft].

A couple of quick premises:

  1. It's remarkable how little is known about city streets, about the use and nature of transit networks, and about how to successfully design and implement urban systems. It's also impressive how infrequently successful places were designed by anyone born after 1895, or had their transit systems built by anyone born after 1930. It's hard to know why and how streets are good, or how to get to them.

  2. Not very much has changed in terms of traffic engineering since 1945, except that the unmanageable deluge of private autos is no longer hypothetical. There are consequential improvements, though, and it's worth trying to give the people who will likely continue to make design decisions about streets some meaningful numbers on which to base or defend alternatives as the profession advances.

Is it possible to figure out enough about #1 to do #2?

I think so, but I'm mostly guessing. The basic approach I would try to take is described well by Allen Downey in his book Think Complexity:

I have described classical models as based on physical laws, expressed in the form of equations, and solved by mathematical analysis; conversely, models of complex systems are often based on simple rules and implemented as computations.

We can think of this trend as a shift over time along two axes:

Equation-based → simulation-based

Analysis → computation

Here are some of the approaches we reflexively take as first recourse in looking at street use problems in the real world:

  • movements of groups of people or vehicles as fluids in a channelized flow
  • amenities and destinations as attractive electrical or gravitational forces
  • street hazards or obstructions as sources of friction or repelling force
  • optimization questions as peaks or troughs of continuous functions

This is how traffic engineers do it now. Here's an example, fundamental to all traffic engineering, that combines several of the above: a traffic stream capacity maximization model:

[Figure: ITE flow vs. density curve]
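
For reference, the shape of that curve usually comes from something like the Greenshields model: assume speed falls linearly with density, and flow (speed times density) becomes a parabola with one maximum. A minimal sketch, with invented rather than measured parameters:

```python
# Greenshields-type flow-density relationship: assume speed falls
# linearly with density, so flow q = k * v is a parabola with a
# single peak at half the jam density. The free-flow speed and jam
# density below are invented for illustration, not measured values.

def flow(density, v_free=60.0, k_jam=120.0):
    """Flow (veh/hr/lane) at a given density (veh/mile/lane)."""
    return density * v_free * (1.0 - density / k_jam)

k_critical = 120.0 / 2     # capacity occurs at k_jam / 2
print(flow(k_critical))    # 1800.0 veh/hr/lane, the peak of the curve
```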

I don't necessarily doubt the validity of saying that the maximum car throughput of a street is described by that graph, but I'm sure its value is overestimated; I can barely imagine a situation outside an intercity highway where it would even be worth calculating. Here is another smooth-lined, perfectly crazy Procrustean bed of a diagram:

[Figure: functional classification hierarchy of streets]

The traffic engineering process seems to be: create a hyper-simplified model, then build cities to match it, from inhospitable arterials down to inaccessible dendritic suburban cul-de-sacs, defining mobility from the outset as the traversal of large distances rather than connectivity between places.

There is some sign though that these oversimplified models are losing favor and will be replaced by something; here are a few encouraging words from the ITE:

Statistical methods and applications have revolutionized many disciplines of science and engineering. This is notably so in the area of traffic engineering, which primarily deals with quantitative data in the planning, design and operation of transportation facilities. A significant development in the last 10-15 years in all areas of engineering that rely upon making statistical inferences from large-volume (and occasionally noisy) data sets, has been the ready availability of powerful statistical-analysis packages...

A. The Influence of Bayesian Methods and Markov-Type Stochastic Processes

The profession is also seeing the impact of enormously improved computing capabilities. Many methods once considered to be largely intractable are now routinely being used to solve engineering problems. For example, Bayesian statistical methods were once limited to a small set of tractable problems, but they now are becoming increasingly accessible through the development of Markov Chain Monte Carlo methods (MCMC methods). See Table 6-12 for some software packages that implement MCMC methods. Re-sampling methods are also seeing widespread use through advanced computer applications.

The Traffic Engineering Handbook 6th Ed.
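
As an aside, the MCMC methods the Handbook mentions are not exotic: the core of a Metropolis-style sampler is a few lines. A minimal sketch, with a made-up target density rather than anything from Table 6-12:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0):
    """Minimal Metropolis sampler: random-walk proposals, accepted
    or rejected by the ratio of target densities."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            x = proposal
        samples.append(x)
    return samples

# Example target: an (unnormalized) standard normal
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=10000)
print(sum(draws) / len(draws))  # should hover near 0
```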

What they mean is: can you create models of traffic that do not rely on the very rough analogies to fluid mechanics that traffic studies have always used, by simulating many vehicles moving around the network? Yes, you can, but you could maybe do much more than that: there's no reason not to try starting with people, leaving out the assumptions about driving or even about particular destinations. Take the kinds of things revealed in tweets and foursquare check-ins, and build these impressive, mostly continuous functions:

[Figures: popularity vs. speed; geotags per unit speed]

Mega Handwaves (if people don't geotag enough tweets, make some up for them)
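
A sketch of one way that second function might be assembled, with invented speeds standing in for real geotagged-tweet records, per the handwave above:

```python
import random
from collections import Counter

# Hypothetical input: travel speeds (mph) inferred from the spacing
# of consecutive geotagged tweets. These are made up for the sketch.
speeds = [abs(random.gauss(15, 10)) for _ in range(5000)]

# Bin geotag counts by speed to approximate a "geotags per unit
# speed" curve; a kernel density estimate could smooth this further.
bin_width = 5
counts = Counter(int(s // bin_width) for s in speeds)

for b in sorted(counts):
    lo, hi = b * bin_width, (b + 1) * bin_width
    print(f"{lo:3d}-{hi:3d} mph: {counts[b]} geotags")
```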

Then use those functions as inputs into some sort of discrete actor model. The effects shown in tweet data are real, and the attractive forces of interesting places are identifiable: maybe use them as parameters for testing hypotheses. Imagine little bots traveling randomly according to the probabilities identified in the tweets, then try to see the effects of modifying the world slightly. For example, how often do people pass through a place that is dense with points of interest to go to another one farther away? Is travel a search for a kind of place, rather than for a particular destination? Do a bunch of people travel past a common intermediate node on their way to tweet-dense places? Try some bots where the answers to those questions are "infrequently", "yes", and "yes" (my personal hypothesis, btw). How would their travel behavior change if these intermediate, currently empty nodes gained some of their own attractive force? If they became endpoints that shortened the search for a destination, could that have cascading effects on traffic? That is, could you reduce the density in that first flow-density graph above, and actually increase overall car mobility by reducing car speeds? It's not out of the question; I kind of think it's likely.
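
A toy version of that experiment might look like the sketch below; the graph, the attraction weights, and the bot counts are all invented for illustration:

```python
import random

# Toy discrete-actor model: bots wander a small graph of places,
# choosing each next stop in proportion to an "attractiveness"
# weight (in a real run, the weights would come from tweet-derived
# functions like the ones above). Count pass-throughs at each node.
edges = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
attraction = {"A": 5.0, "B": 1.0, "C": 1.0, "D": 5.0}  # invented

def walk(start, steps):
    node, visits = start, []
    for _ in range(steps):
        neighbors = edges[node]
        weights = [attraction[n] for n in neighbors]
        node = random.choices(neighbors, weights=weights)[0]
        visits.append(node)
    return visits

pass_through = {n: 0 for n in edges}
for _ in range(1000):          # 1000 bots, 20 steps each
    for node in walk("A", 20):
        pass_through[node] += 1

print(pass_through)
# Re-run after boosting attraction["B"] to see how giving an "empty"
# intermediate node its own attractive force changes the flows.
```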

There are millions of potential hypotheses. I had a professor once who said that the only way to solve problems is to know the answer beforehand, or to guess correctly. I don't think the good ideas in urban design are that obscure. But the value in computer-first problem-solving is that there are a lot of problems where the answer is guessable but hard to prove until you "see" it. There's a fantastic story about the Monty Hall problem, in which a pretty weird but not inaccessible result was disbelieved by Paul Erdős until he saw a computer simulation of it! (I think good visualizations function in exactly the same way.)
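
That simulation is easy to reproduce; a minimal sketch:

```python
import random

def monty_hall(switch, trials=100000):
    """Simulate the Monty Hall game; return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that hides a goat and isn't the pick
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # ~0.33
print(monty_hall(switch=True))   # ~0.67
```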

TL;DR: I have a lot of confidence in your ability to continue figuring out interesting things from this data, and I wish I had less notional feedback; computers will continue to be essential.
