Not-so-smart cities

First, I want to say I totally agree with the last half of the last sentence in Greg Lindsay’s opinion piece in the New York Times:

…the smartest cities are the ones that embrace openness, randomness and serendipity — everything that makes a city great.

The rest of the piece I don’t quite get.  Lindsay objects to the new city being built in New Mexico, which will have no residents but will be used solely for testing “smart city” technology like “smart power grids, cyber security and intelligent traffic and surveillance systems”.  He objects because he feels computer simulations are not robust enough to capture humans’ inherent “randomness”.  To support his case, he uses the example of a RAND Corporation study, from 1968 (!), that failed to “smartly” reconfigure fire service.

Take the 1968 decision by New York Mayor John V. Lindsay to hire the RAND Corporation to streamline city management through computer models. It built models for the Fire Department to predict where fires were likely to break out, and to decrease response times when they did. But, as the author Joe Flood details in his book “The Fires,” thanks to faulty data and flawed assumptions — not a lack of processing power — the models recommended replacing busy fire companies across Brooklyn, Queens and the Bronx with much smaller ones.

What RAND could not predict was that, as a result, roughly 600,000 people in the poorest sections of the city would lose their homes to fire over the next decade. Given the amount of money and faith the city had put into its models, it’s no surprise that instead of admitting their flaws, city planners bent reality to fit their models — ignoring traffic conditions, fire companies’ battling multiple blazes and any outliers in their data.

The final straw was politics, the very thing the project was meant to avoid. RAND’s analysts recognized that wealthy neighborhoods would never stand for a loss of service, so they were placed off limits, forcing poor ones to compete among themselves for scarce resources. What was sold as a model of efficiency and a mirror to reality was crippled by the biases of its creators, and no supercomputer could correct for that.
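Before I get to my objections, it’s worth seeing how cheaply the “faulty data and flawed assumptions” failure reproduces.  Below is a toy simulation of my own (not RAND’s model, and every number in it is invented): a planner’s model that assumes a fire company is always free the moment a call comes in, versus a simulation where calls can stack up.  At a busy station, ignoring simultaneous blazes makes the predicted response time absurdly optimistic, and the busy stations are exactly the ones the models then recommended shrinking.

    import random

    random.seed(42)

    TRAVEL_TIME = 4.0              # minutes to reach a fire once a company is free (invented)
    SERVICE_TIME = 45.0            # minutes a company is tied up per fire (invented)
    SIM_MINUTES = 365 * 24 * 60.0  # simulate one year of calls

    def simulate_station(calls_per_hour):
        """Average true response time (queue wait + travel) for one company."""
        t, free_at, waits = 0.0, 0.0, []
        while t < SIM_MINUTES:
            # Poisson arrivals: exponential gaps between calls, in minutes
            t += random.expovariate(calls_per_hour / 60.0)
            start = max(t, free_at)  # wait if the company is still out on a call
            waits.append(start - t + TRAVEL_TIME)
            free_at = start + SERVICE_TIME
        return sum(waits) / len(waits)

    for label, rate in [("quiet station", 0.2), ("busy station", 1.0)]:
        naive = TRAVEL_TIME          # the model that ignores overlapping calls
        actual = simulate_station(rate)
        print(f"{label}: model predicts {naive:.0f} min, simulation shows {actual:.0f} min")

The point isn’t the numbers (again, they’re made up); it’s that the bias comes from the assumption, and no amount of processing power fixes that.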

First, any good planner or engineer will tell you that models and software should be a starting point, not a finishing point.  I have no doubt that any new technology that comes out of the Center for Innovation, Testing and Evaluation (that is the new city’s name) will be refined in the real world as its performance among us mammals is tested.  If the RAND Corporation couldn’t (or wouldn’t) adjust in 1968, they were bad planners.
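To make that concrete, here’s an equally toy sketch of what “starting point, not finishing point” means in practice: take the model’s assumed numbers and keep blending them with what the field actually reports.  The update rule, the weight and the figures are all hypothetical; the point is just that the feedback loop exists.

    def recalibrate(assumed_minutes, observed_minutes, weight=0.3):
        """Blend a model assumption with observed real-world values."""
        observed_mean = sum(observed_minutes) / len(observed_minutes)
        return (1 - weight) * assumed_minutes + weight * observed_mean

    service_time = 45.0                        # the model's original assumption
    field_reports = [60.0, 75.0, 52.0, 90.0]   # hypothetical incident durations
    service_time = recalibrate(service_time, field_reports)
    print(f"updated service-time assumption: {service_time:.1f} min")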

Second, we shouldn’t use technology because politics could get in the way?  Don’t fault the technology; fault bad process and implementation.  Also, where does this line of reasoning lead us?  By that standard we would abandon any system politics can distort, which is to say all of them.

Third and finally, this is the only example Lindsay gives of a failure of “smart” systems in the real world (except for a reference to something Jane Jacobs said), and it occurred in 1968.  Lindsay omits the myriad “smart city” technologies that are already commonplace and are generally deemed to have net positive impacts.  Here is a partial list (and I’m no expert):

And coming soon:

Nothing on the second list, and I’m pretty sure nothing on the first (at least in computerized form), even existed in 1968.  The fact that many of these systems currently exist, and regularly operate without massive failure, seems to refute Lindsay’s assertion that we shouldn’t continue to develop them.
