Via a friend, a video in which autonomous vehicles are already with us. And picking up fast food.
The article I linked to earlier by Mims sent me to Do the Math, which I find very intriguing. It’s written by Tom Murphy, an associate professor of physics at UC San Diego. He writes about growth, energy and economics, but from a physical-science point of view, which is fascinating. His most-read posts include Galactic-Scale Energy, Can Economic Growth Last and Sustainability Means Bunkty to Me.
If you only read one post, read Exponential Economist Meets Finite Physicist.
Act One: Bread and Butter
Physicist: Hi, I’m Tom. I’m a physicist.
Economist: Hi Tom, I’m [ahem..cough]. I’m an economist.
Physicist: Hey, that’s great. I’ve been thinking a bit about growth and want to run an idea by you. I claim that economic growth cannot continue indefinitely.
Economist: [chokes on bread crumb] Did I hear you right? Did you say that growth cannot continue forever?
Physicist: That’s right. I think physical limits assert themselves.
Economist: Well sure, nothing truly lasts forever. The sun, for instance, will not burn forever. On the billions-of-years timescale, things come to an end.
Physicist: Granted, but I’m talking about a more immediate timescale, here on Earth. Earth’s physical resources—particularly energy—are limited and may prohibit continued growth within centuries, or possibly much shorter depending on the choices we make. There are thermodynamic issues as well.
They go all the way through dessert.
He’s also obsessed with (or extremely dedicated to) reducing his personal energy footprint, and writes about his exploits pinching therms in graphic detail.
The only downside to Tom’s blog is that he doesn’t include full text articles in his RSS feed. But you should subscribe anyway.
Your first robot car might not be the totally-robot Google kind, but a lesser robot that only takes over when you’re on the freeway. Volvo has taken its long-running road-train test to public streets in Spain.
Volvo used three vehicles – an XC60, a V60 and an S60 – that drove autonomously behind a truck for 200 kilometers (124 miles) at 85 kilometers an hour (53 miles per hour) on the roads outside Barcelona. The following vehicles used “cameras, radar and laser sensors” and wireless communication to copy what the lead vehicle was doing “using Ricardo autonomous control – accelerating, braking and turning in exactly the same way as the leader.” The vehicles were about six meters (20 feet) apart.
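The follower behavior described there – match the leader’s speed while holding a fixed gap – can be sketched as a simple proportional control loop. The 6-meter gap and 85 km/h speed come from the article; the gain and function names are my own illustrative assumptions, not Ricardo’s actual controller:

```python
# Minimal sketch of a platoon follower's longitudinal control loop.
# The 6 m target gap is from the article; the gain and interface are
# illustrative assumptions, not the actual Ricardo system.

TARGET_GAP_M = 6.0   # desired spacing between vehicles (from the article)
KP_GAP = 0.5         # proportional gain on gap error (illustrative)

def follower_speed(leader_speed_mps: float, measured_gap_m: float) -> float:
    """Return the follower's commanded speed: match the leader,
    then nudge faster or slower to pull the gap toward 6 m."""
    gap_error = measured_gap_m - TARGET_GAP_M
    return max(0.0, leader_speed_mps + KP_GAP * gap_error)

# Leader cruising at 85 km/h is about 23.6 m/s; a follower that has
# drifted back to 8 m speeds up slightly to close the gap.
cmd = follower_speed(23.6, 8.0)
```

In a real road train this loop would run many times a second, with the leader’s speed arriving over the wireless link and the gap coming from radar.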
I’ve been meaning to write a “how this urbanist stopped worrying and learned to love the driverless car” post for a while, but I’ve finally been spurred into action by this piece in the Atlantic Cities by Greater Greater Washington founder David Alpert. Right up front I want to say I still have a lot of concerns about how we plan and incorporate robot cars, but on this issue of competing road users, I take a different view.
Alpert’s contention is that in our society’s haste to adopt driverless cars, we will “intensify current tensions” between drivers (more accurately called passengers in a robot car-filled world of the future) and non-auto users, such as pedestrians and cyclists, who are trying to use the same right of way. I think this case is overstated for a number of reasons.
The author’s main evidence that tensions will increase is a reference to an animation by some computer scientists showing how to optimize an intersection when most of the cars are driverless, thus increasing flow. According to the article,
[H]uman-driven cars would have to wait for a signal that would be optimized based on what everyone else is doing. And the same would be true of pedestrians and bike riders.
And to that Alpert reacts:
That certainly sounds like all other users of the road will have to act at the convenience of the driverless cars, under constraints designed to maximize vehicle movement instead of balancing the needs of various users…
The video even depicts an intersection with a whopping 12 lanes for each roadway, at a time when most transportation professionals have come to believe that grids of smaller roads, not mega-arterials, are the best approach to mobility in metropolitan areas.
Driverless cars, therefore, are poised to trigger a whole new round of pressure to further redesign intersections for the throughput of vehicles above all else.
I’m not sure how this one animation demonstrates why driverless cars would trigger a gush of road-building or elimination of non-auto facilities. Setting aside the fact that I’m sure this animation was developed as a proof-of-concept (I can hear the research team now: “If we use 12 lanes in each direction, it will look even more impressive!”), this leads me to my first objection to the premise that driverless cars will increase tensions.
Driverless cars don’t make bad roads, people make bad roads
As Alpert himself states, “Already, cities host ongoing and raucous debates over the role of cars versus people on their streets. For over 50 years, traffic engineers with the same dreams about optimizing whizzing cars have designed and redesigned intersections to move more and more vehicles.” Yes, and we’ll continue to have this debate into the future whether robot cars are adopted or not. Given that gradual adoption of this technology is the most likely scenario (more on that later), I don’t see auto users getting more vocal (than they already are) about road capacity because their car has a few more widgets.
Building a balanced transportation system that looks at the full picture of quality of life rather than just mobility and speed will continue to be a challenge, although we seem to be making some progress in that direction. Issues of public health, environmental impact and land use impacts will probably always take some extra effort to incorporate into transportation decision-making, an effort organizations like Greater Greater Washington should continue to make. I view this as an institutional problem, failing to bring full information about transportation systems impacts to the design table, and it should be addressed in our decision-making processes.
12-lane at-grade intersections would make any cityscape pretty awful, but that leads me to my second objection:
Driverless cars can do more with less
Maybe the computer scientists at UT Austin should have shown a 2-lane 4-way intersection with driverless cars instead of a 12-lane one. They also should have shown a comparison with a present-day intersection. One of the potential benefits of driverless cars is squeezing more flow or capacity out of the road systems we already have. Cars can drive closer together, and yes, maybe intersections can look more India-like. Potentially, we’ll get more from our existing concrete without having to widen roads or remove non-auto infrastructure.
There is also this nagging funding issue. In Minnesota for example, we already can’t pay for all the roads we want. So 1) a huge explosion of more road-building probably isn’t likely and 2) driverless cars give us kind of another way out: if we’re intent on adding more capacity, maybe we can make our vehicles smarter rather than our roads wider.
Driverless cars are safer
The first forays into “driverless cars” are about collision detection and avoidance (see a long list of existing implementations here). Google’s driverless car has driven 200,000 miles and been involved in two accidents (both while being driven by a human). Before any cars drive themselves around, their computer brains will simply be allowed to stop us from having accidents. This is good for auto users and others alike. And adoption will happen gradually (they’ll be pretty expensive at first).
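At its simplest, collision avoidance is a time-to-collision check: divide the gap to the obstacle by the closing speed, and intervene if the result drops below a threshold. A minimal sketch, where the 2-second threshold and the interface are my illustrative assumptions rather than any vendor’s specification:

```python
# Minimal time-to-collision (TTC) check, the core idea behind the
# collision-avoidance systems mentioned above. The 2.0 s threshold
# and function signature are illustrative assumptions.

TTC_THRESHOLD_S = 2.0

def should_brake(gap_m: float, closing_speed_mps: float) -> bool:
    """True if the computer should intervene and brake.
    closing_speed_mps > 0 means we are gaining on the obstacle."""
    if closing_speed_mps <= 0:
        return False              # not closing: leave the driver alone
    ttc = gap_m / closing_speed_mps
    return ttc < TTC_THRESHOLD_S

assert should_brake(10.0, 10.0)       # 1 s to impact: brake
assert not should_brake(50.0, 10.0)   # 5 s to impact: no intervention
assert not should_brake(10.0, -2.0)   # pulling away: do nothing
```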
It seems obvious that driverless cars will be programmed to not hit pedestrians and cyclists. Driverless cars will never (or very rarely) drive in a bike lane or right-hook a cyclist. And for the next fifty years, they’ll probably be operating on roadways that look very similar to what we have today, pedestrian cross-walks and all. The dys/utopian future where we have streets with tightly-spaced driverless cars traveling 200 mph is quite a ways off, and when that happens, why shouldn’t they be limited access and/or grade separated? Wouldn’t we require the same of high-speed rail?
Again, there are lots of other potential negative impacts we need to be aware of as driverless cars become common (see my summary here), but I think these can be addressed by human policy decisions. We also need to take some drastic action on emissions from transportation that contribute to climate change, and robot cars will likely not have a measurable impact there for some time (it’s also possible our action, if we take any, may actually delay their deployment).
The New York Times has an interesting article about the downsides of too frequently working in teams and/or not having enough solitary work time or space.
SOME teamwork is fine and offers a fun, stimulating, useful way to exchange ideas, manage information and build trust.
But it’s one thing to associate with a group in which each member works autonomously on his piece of the puzzle; it’s another to be corralled into endless meetings or conference calls conducted in offices that afford no respite from the noise and gaze of co-workers. Studies show that open-plan offices make workers hostile, insecure and distracted. They’re also more likely to suffer from high blood pressure, stress, the flu and exhaustion. And people whose work is interrupted make 50 percent more mistakes and take twice as long to finish it.
I find this particularly relevant to working in the public sector, where current trends make it anathema to decide things independently or to trust the detail work of “technical experts”; the fashion is to trust the ultimate wisdom of the group. Humans are not built to resist the downsides of groupthink.
The reasons brainstorming fails are instructive for other forms of group work, too. People in groups tend to sit back and let others do the work; they instinctively mimic others’ opinions and lose sight of their own; and often succumb to peer pressure. The Emory University neuroscientist Gregory Berns found that when we take a stance different from the group’s, we activate the amygdala, a small organ in the brain associated with the fear of rejection. Professor Berns calls this “the pain of independence.”
The article notes that the internet and electronic communication may provide an antidote for groupthink.
The one important exception to this dismal record is electronic brainstorming, where large groups outperform individuals; and the larger the group the better. The protection of the screen mitigates many problems of group work. This is why the Internet has yielded such wondrous collective creations. Marcel Proust called reading a “miracle of communication in the midst of solitude,” and that’s what the Internet is, too. It’s a place where we can be alone together — and this is precisely what gives it power.
Sometime in the next few weeks, Paul Brazelton will move his family into a 1935 Tudor in south Minneapolis that has no furnace. He’s just finished a massive renovation of the family home and even though winter’s bearing down, he removed the boiler and plans to use that basement space for his daughters’ home-school classroom.
He also took out the fireplace.
If this sounds like the most uninviting house (and classroom) in Minneapolis, there’s something else to know: Brazelton, a software engineer and passionate environmentalist, has nearly finished a retrofit of his house to the stringent engineering standards of the Passivhaus model, a German system of homebuilding that uses insulation and highly efficient doors and windows to save energy.
The finished 2,000-square-foot home could be warmed even in the dead of winter with a pair of small space heaters, Brazelton said, though the family plans to piggyback on their hot water heater and use an in-floor heating system in the basement.
The project is the renovation of an existing home to meet EnerPHit standard for energy performance. EnerPHit is a subset of the Passive House standard (hence the PH), which is an energy performance standard that requires very high levels of energy efficiency. The Passive House Institute has a summary:
A Passive House is a very well-insulated, virtually air-tight building that is primarily heated by passive solar gain and by internal gains from people, electrical equipment, etc. Energy losses are minimized. Any remaining heat demand is provided by an extremely small source. Avoidance of heat gain through shading and window orientation also helps to limit any cooling load, which is similarly minimized. An energy recovery ventilator provides a constant, balanced fresh air supply. The result is an impressive system that not only saves up to 90% of space heating costs, but also provides a uniquely terrific indoor air quality.
Passive House is a performance standard, meaning it doesn’t prescribe design features the way LEED does; instead, the building must meet measured targets after construction is complete: an airtight building shell (≤ 0.6 air changes per hour at 50 pascals, measured by a blower door test), a total heating and cooling demand below 4.7 kBtu/sq ft/yr, and total energy use of ≤ 38.1 kBtu/sq ft/yr.
In layman’s terms, this means Passive House designs are 11 times more airtight than a conventionally designed and built modern home. As for energy use, a typical single-family detached home uses 76 kBtu/sq ft/yr. My own house was built in the 1920s and currently has no wall insulation. In 2010, we used 89 kBtu/sq ft/yr in total, and I think we’re fairly frugal with our electricity. That means when the Brazelton family finishes their home, it will use less than half the total energy of my house and be 15% larger.
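That “less than half” claim can be sanity-checked with a little arithmetic, using only the figures above (the per-square-foot intensities and the 15% size difference):

```python
# Sanity check of the energy comparison above. The intensities are
# from the post; 1.15 is the stated "15% larger" floor-area factor.

PASSIVE_HOUSE_MAX = 38.1   # kBtu/sq ft/yr, Passive House total-energy limit
MY_HOUSE_2010 = 89.0       # kBtu/sq ft/yr, my house's measured 2010 use
SIZE_FACTOR = 1.15         # the MinnePHit house is 15% larger

# Total-energy ratio = (intensity ratio) x (floor-area ratio)
ratio = (PASSIVE_HOUSE_MAX / MY_HOUSE_2010) * SIZE_FACTOR
# ratio comes out just under 0.5: even at the standard's limit, and
# 15% bigger, the retrofit uses less than half the total energy.
```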
The Passive House standard doesn’t require or depend on renewable energy to achieve this high energy performance. It’s focused on minimizing, to the greatest extent possible, the loss of heat and capitalizing on natural heat sources like sunlight and even body heat. The MinnePHit house will be renewable-ready, but it won’t have renewables to start with. Paul, the owner, puts it eloquently:
…we decided to use our limited resources in building a house with the highest level of efficiency and durability. If maintained correctly, solar panels can last decades. On the other hand, insulation can last centuries. Looking again at the long term, the best investment is using less energy, not alternate energy.
Last but not least, this home is energy efficient because it is location efficient: it sits in south Minneapolis with nearby access to jobs, recreation and services. The Brazeltons definitely don’t have to use an automobile for every trip, and they likely won’t be traveling far to their destinations. The other local example of Passive House design can’t make that claim.
St. Paul has a nice video introducing their electric vehicle charging infrastructure. According to a presentation I saw at MNAPA, the City hopes to have 150 public stations available by 2015. They also estimated that the cost for installation was anywhere between $850 in parking ramps to $6,000 in on-street spaces.
Google has a pretty cool labs project called Fusion Tables, which I think most people don’t know about. One great feature is the ability to create a web map from georeferenced data quickly and easily. Great news, it’s getting even easier. From Steven Vance, news that you can now upload shapefiles (through a third-party site).
It is now possible to upload a shapefile (and its companion files SHX, PRJ, and DBF) to Google Fusion Tables (GFT).
Before we go any further, keep in mind that the application that does this will only process 100,000 rows. Additionally, GFT only gives each user 200 MB of storage (and they don’t tell you your current status, that I can see).
- Login to your Google account (at Gmail, or at GFT).
- Prepare your data. Ensure it has fewer than 100,000 rows.
- ZIP up your dataX.shp, dataX.shx, dataX.prj, and dataX.dbf. Use WinZip for Windows, or for Mac, right-click the selection of files and select “Compress 4 items”.
- Visit the Shape to Fusion website. You will have to authorize the web application to “grant access” to your GFT tables. It needs this access so that after the web application processes your data, it can insert it into GFT.
- If you want a Centroid Geometry column or a Simplified Geometry column added, click “Advanced Options” and check their checkboxes – see notes below for an explanation.
- Choose the file to upload and click Upload.
- Leave the window open until it says it has processed all of the rows. It will report “Processed Y rows and inserted Y rows”. You will be given a link to the GFT the web application created.
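Step 3 above – bundling the four companion files into one archive – can also be done in a few lines of Python instead of WinZip, which is handy if you’re scripting several uploads. The dataX base name is the placeholder from the steps above; swap in your shapefile’s actual name:

```python
# Bundle the four shapefile components into one ZIP for upload to
# Shape to Fusion. "dataX" is the placeholder name from the steps
# above; the empty placeholder files just let this sketch run standalone.
import zipfile

base = "dataX"
parts = [f"{base}.{ext}" for ext in ("shp", "shx", "prj", "dbf")]

for p in parts:
    open(p, "wb").close()   # stand-ins for your real shapefile components

with zipfile.ZipFile(f"{base}.zip", "w") as zf:
    for p in parts:
        zf.write(p)         # all four components in a single archive
```

Remember the limits quoted above still apply: the archive’s DBF must have fewer than 100,000 rows, and the result has to fit in your 200 MB of GFT storage.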
First, I want to say I totally agree with the last half of the last sentence in Greg Lindsay’s opinion piece in the New York Times:
…the smartest cities are the ones that embrace openness, randomness and serendipity — everything that makes a city great.
The rest of the piece I don’t quite get. Lindsay objects to the new city being built in New Mexico, which will have no residents but will be used solely for testing “smart city” technology like “smart power grids, cyber security and intelligent traffic and surveillance systems”. He objects because he feels computer simulations are not robust enough to capture humans’ inherent “randomness”. To support his case, he uses the example of a RAND Corporation study, from 1968 (!), that failed to “smartly” reconfigure fire service.
Take the 1968 decision by New York Mayor John V. Lindsay to hire the RAND Corporation to streamline city management through computer models. It built models for the Fire Department to predict where fires were likely to break out, and to decrease response times when they did. But, as the author Joe Flood details in his book “The Fires,” thanks to faulty data and flawed assumptions — not a lack of processing power — the models recommended replacing busy fire companies across Brooklyn, Queens and the Bronx with much smaller ones.
What RAND could not predict was that, as a result, roughly 600,000 people in the poorest sections of the city would lose their homes to fire over the next decade. Given the amount of money and faith the city had put into its models, it’s no surprise that instead of admitting their flaws, city planners bent reality to fit their models — ignoring traffic conditions, fire companies’ battling multiple blazes and any outliers in their data.
The final straw was politics, the very thing the project was meant to avoid. RAND’s analysts recognized that wealthy neighborhoods would never stand for a loss of service, so they were placed off limits, forcing poor ones to compete among themselves for scarce resources. What was sold as a model of efficiency and a mirror to reality was crippled by the biases of its creators, and no supercomputer could correct for that.
First, any good planner or engineer will tell you that models and software should be a starting point, not a finishing point. I have no doubt that any new technology that comes out of the Center for Innovation, Testing and Evaluation (that is the new city’s name) will be refined in the real world as its performance among us mammals is tested. If the RAND Corporation couldn’t (or wouldn’t) adjust in 1968, they were bad planners.
Second, we shouldn’t use technology because politics could get in the way? Don’t fault technology, fault bad process and implementation. Also, where does this line of reasoning lead us?
Third and finally, this is the only example Lindsay gives of a failure of “smart” systems in the real world (except for a reference to something Jane Jacobs said), and it occurred in 1968. Lindsay omits the myriad “smart city” technologies that are already commonplace and are generally deemed to have net positive impacts. Here is a partial list (and I’m no expert):
- Traffic models
- GIS fire service studies
- Building management systems
- Ramp metering, active traffic management, variable speed limits and a whole host of ITS technology
- Planes that land themselves
- Cars that drive themselves
- Cars that turn into trains
- The ability to predict traffic jams
- Computers that are indistinguishable from humans