A small experiment with solar

100 watt panel with wood mount

Being carbon-conscious, naturally inclined to tinker, and seeing the falling costs for components, I was curious to know whether I could put together a small solar PV system on my own.  Having experienced a few blackouts this summer and expecting more in the future, I was also curious about providing a small amount of backup power for essential items.  Here’s the story of my first foray into off-grid renewables.

This post at Do The Math (an excellent blog you should read regularly), in which Tom Murphy describes his small, off-grid system, really got me started on the whole thing.  I’m not a physics professor, but after reading it and doing some additional Google searching, it seemed easy enough for a lay person armed with a small amount of reading.  A valuable resource (another tip from Tom’s blog) is the Solar Living Sourcebook, available at your local library, which covers the basics of what solar PV is, how it works, important safety tips, and options for setup.  I also learned a few things from various YouTube videos and general Google searching.

The system I put together is 12 volts, which seems very common for small, off-grid installations.  It’s basically only five things: a solar panel, a charge controller, a battery, an inverter, and assorted wires and fuses.  The solar panel provides the electrons, the charge controller manages how those electrons flow to the battery (and makes sure it doesn’t overcharge), the battery stores electrons, and the inverter turns the battery’s 12 volt DC power into 110 volt AC power so it can be used with regular household electronics.  The wires and fuses connect everything together and provide safety.

Panel unboxing

You can now purchase relatively affordable panels from Amazon or Home Depot in many wattages and sizes.  I chose a 100 watt panel that seemed to receive good reviews and whose manufacturer’s website suggested the company might be around for a while.

Charge controller showing all systems go!

The other pieces of the system (inverter and charge controller especially) come in a huge range of prices.  After some reading, I decided that it might be better to spend a little more on a charge controller, since many people had complaints about cheap versions, and keeping your battery well maintained is important (that’s the charge controller’s job).  I purchased a 30 amp controller from Morningstar, which I think could handle up to 400 watts of panels if I expand the system in the future.  The battery is an 80 amp hour sealed lead acid unit; I purchased it from a local battery store, and it’s a discount version.

A bad photo of the 80 ah battery

So what can this thing power, you ask?  That’s a function of how much the panel produces, how much the battery stores, and how much amperage I can draw at one time from the battery and inverter.

According to some assumptions I pulled from NREL’s PVWatts tool, the panel might generate 400 watt hours per day (100 watts × 4 equivalent hours of full production) in the peak season and maybe 210 in the low season (November), although I’ve seen higher numbers in other places.  The battery is large enough to store all that daily production and more (80 amp hours × 12 volts = 960 watt hours).  In fact, it would probably take about two and a half days of full sun to charge the battery from empty.
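
As a sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python.  The 4.0 and 2.1 equivalent sun-hours per day are the assumptions I pulled from PVWatts; everything else is a nameplate rating, so treat the output as a rough estimate rather than measured performance.

```python
# Back-of-the-envelope solar math (assumed values, not measurements)
PANEL_WATTS = 100        # rated panel output
SUN_HOURS_PEAK = 4.0     # equivalent full-sun hours/day, peak season (PVWatts assumption)
SUN_HOURS_LOW = 2.1      # equivalent full-sun hours/day, November (PVWatts assumption)
BATTERY_AH = 80          # sealed lead-acid battery capacity
SYSTEM_VOLTS = 12

battery_wh = BATTERY_AH * SYSTEM_VOLTS            # 960 Wh of nominal storage
daily_wh_peak = PANEL_WATTS * SUN_HOURS_PEAK      # ~400 Wh/day in peak season
daily_wh_low = PANEL_WATTS * SUN_HOURS_LOW        # ~210 Wh/day in November

print(f"Battery capacity: {battery_wh} Wh")
print(f"Peak-season production: {daily_wh_peak:.0f} Wh/day")
print(f"Low-season production: {daily_wh_low:.0f} Wh/day")
print(f"Days of full sun to charge an empty battery: {battery_wh / daily_wh_peak:.1f}")
```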

Even in the winter, the daily production of the panel would probably be enough to power a few lights (the LED variety), an efficient laptop, a fan, and a small TV for a few hours.  It won’t run a hotplate, anything but the smallest air conditioner, a heater, or a refrigerator, at least not continuously.  The battery and the inverter could probably handle one of those loads, but the panel wouldn’t be able to keep up.  As a backup power source, this setup would power my refrigerator for about 8 hours, and our 8.8 cubic foot chest freezer for about 16 hours.  That assumes a fully charged battery; the panel couldn’t keep up with the draw from those appliances for more than a day.  These are just my estimates; I don’t have any real-world results yet, but I will report back soon.  Right now I’ve got just the chest freezer plugged into the inverter, and I’m going to time how long it takes until I get the low-battery warning.
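
Here is the same kind of rough sketch for the backup-power estimates.  The average appliance draws (100 W for the refrigerator, 50 W for the chest freezer) and the 85% inverter efficiency are assumptions I chose to be roughly consistent with the run times above, not measured values.

```python
# Rough backup-runtime estimate; appliance draws are assumptions, not measurements
BATTERY_WH = 80 * 12          # 960 Wh nominal
INVERTER_EFFICIENCY = 0.85    # plausible figure for a small 12 V inverter (assumption)

appliance_avg_watts = {
    "refrigerator": 100,               # assumed average draw, duty cycle included
    "chest freezer (8.8 cu ft)": 50,   # assumed average draw
}

for name, watts in appliance_avg_watts.items():
    hours = BATTERY_WH * INVERTER_EFFICIENCY / watts
    print(f"{name}: ~{hours:.0f} hours from a full battery")
```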

Things I’ve learned so far:

  1. It’s all the other stuff that costs money.  At this stage, the panel itself only accounts for 20% of the cost.  If I added three more panels (which would probably max out the charge controller) to economize, the panels would still only be 51% of the cost.
  2. I need scale to “save” money.  Right now, my costs per watt are about 76% higher than what I have been quoted to put a grid-tied, full-size system on my roof.  If I maxed out the charge controller with three more panels and got another battery, I could bring my costs in line with the pros (again, on a per-watt basis).  I doubt this would continue to scale up, since batteries get expensive and I would get into more serious electrical work pretty quickly.
  3. I need another battery (or two) for large stuff.  High-amperage appliances, like a vacuum, seem to be within the wattage range of the inverter, but my battery is only 80 Ah.  The internet tells me I should only draw about 10-12% of that capacity in amps to avoid shortening the battery’s life, and indeed I got a low-battery warning when trying to run the shop vac (see the sketch after this list).
  4. You should think hard about where to locate a panel before you embark on this kind of project.  I’m still squeamish about getting into roofing for fear I will cause a leak, and others in my household disagree about the aesthetics of a home-built wooden frame.  My goal in the long run is to get this on a roof somewhere.
  5. In the future, our homes should probably run direct current (DC) rather than alternating current (AC).
  6. Solar panels aren’t just for tree-huggers.  If you’ve ever searched YouTube for videos about solar back-up systems, you’ll be a lot less surprised by news items like the Atlanta Tea Party teaming up with the Sierra Club to promote more solar.  Many of the instructional videos I watched were clearly made by folks of the conservative persuasion who were into solar because they feared the grid would go down or weren’t comfortable being beholden to utilities/the government.  Maybe there is more common ground here than we thought.
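
Here is the sketch mentioned in item 3: the 10-12% rule of thumb applied to my 80 Ah battery.  The ~1,000 W figure for the shop vac is an assumption for illustration, not a nameplate reading.

```python
# Rule-of-thumb discharge check for the 80 Ah battery (the "10-12% of capacity" guideline)
BATTERY_AH = 80
SYSTEM_VOLTS = 12

safe_amps_low, safe_amps_high = 0.10 * BATTERY_AH, 0.12 * BATTERY_AH  # 8.0 to 9.6 A
safe_watts_low = safe_amps_low * SYSTEM_VOLTS    # ~96 W continuous
safe_watts_high = safe_amps_high * SYSTEM_VOLTS  # ~115 W continuous

def within_guideline(appliance_watts: float) -> bool:
    """True if the DC-side current stays within the 10-12% rule of thumb."""
    return appliance_watts / SYSTEM_VOLTS <= safe_amps_high

print(f"Gentle continuous draw: {safe_watts_low:.0f} to {safe_watts_high:.0f} W")
print("Shop vac (~1000 W assumed) within guideline?", within_guideline(1000))  # False
```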

 

Do the Math

The article I linked to earlier by Mims sent me to Do the Math, which I find very intriguing.  It’s written by Tom Murphy, an associate professor of physics at UC San Diego.  He writes about growth, energy and economics, but from a physical science point of view, which is fascinating.  Some of the most-read posts include Galactic-Scale Energy, Can Economic Growth Last and Sustainability Means Bunkty to Me.

If you only read one post, read Exponential Economist Meets Finite Physicist.

Act One: Bread and Butter

Physicist: Hi, I’m Tom. I’m a physicist.

Economist: Hi Tom, I’m [ahem..cough]. I’m an economist.

Physicist: Hey, that’s great. I’ve been thinking a bit about growth and want to run an idea by you. I claim that economic growth cannot continue indefinitely.

Economist: [chokes on bread crumb] Did I hear you right? Did you say that growth cannot continue forever?

Physicist: That’s right. I think physical limits assert themselves.

Economist: Well sure, nothing truly lasts forever. The sun, for instance, will not burn forever. On the billions-of-years timescale, things come to an end.

Physicist: Granted, but I’m talking about a more immediate timescale, here on Earth. Earth’s physical resources—particularly energy—are limited and may prohibit continued growth within centuries, or possibly much shorter depending on the choices we make. There are thermodynamic issues as well.

They go all the way through dessert.

He’s also extremely dedicated to (okay, obsessed with) reducing his personal energy footprint, and he writes about his therm-pinching exploits in graphic detail.

The only downside to Tom’s blog is that he doesn’t include full text articles in his RSS feed.  But you should subscribe anyway.

Road train tested on public roads

Your first robot car might not be the totally-robot Google kind, but a lesser robot that only takes over when you’re on the freeway.  Volvo has taken its long-running road train test to public streets in Spain.

Volvo used three vehicles – an XC60, a V60 and an S60 – that drove autonomously, following a truck for 200 kilometers (124 miles) at 85 kilometers an hour (53 miles per hour) on the roads outside Barcelona. The following vehicles used “cameras, radar and laser sensors” and wireless communication to copy what the lead vehicle is doing “using Ricardo autonomous control – accelerating, braking and turning in exactly the same way as the leader.” The vehicles were about six meters (20 feet) apart.

David Alpert on driverless cars

I’ve been meaning to write a “how this urbanist stopped worrying and learned to love the driverless car” post for a while, but I’ve finally been spurred into action by this piece in the Atlantic Cities by Greater Greater Washington founder David Alpert.  Right up front I want to say I still have a lot of concerns about how we plan and incorporate robot cars, but on this issue of competing road users, I take a different view.

Alpert’s contention is that in our society’s haste to adopt driverless cars, we will “intensify current tensions” between drivers (more accurately called passengers in a robot car-filled world of the future) and non-auto users, such as pedestrians and cyclists, who are trying to use the same right of way.  I think this case is overstated for a number of reasons.

The author’s main evidence for the idea that tensions will increase is a reference to an animation, done by some computer scientists, showing how to optimize an intersection when most of the cars are driverless, thus increasing flow.  According to the article,

[H]uman-driven cars would have to wait for a signal that would be optimized based on what everyone else is doing. And the same would be true of pedestrians and bike riders.

And to that Alpert reacts:

That certainly sounds like all other users of the road will have to act at the convenience of the driverless cars, under constraints designed to maximize vehicle movement instead of balancing the needs of various users…

The video even depicts an intersection with a whopping 12 lanes for each roadway, at a time when most transportation professionals have come to believe that grids of smaller roads, not mega-arterials, are the best approach to mobility in metropolitan areas.

Driverless cars, therefore, are poised to trigger a whole new round of pressure to further redesign intersections for the throughput of vehicles above all else.

I’m not sure how this one animation demonstrates why driverless cars would trigger a gush of road-building or elimination of non-auto facilities.  Setting aside the fact that I’m sure this animation was developed as a proof-of-concept (I can hear the research team now: “If we use 12 lanes in each direction, it will look even more impressive!”), this leads me to my first objection to the premise that driverless cars will increase tensions.

Driverless cars don’t make bad roads, people make bad roads

As Alpert himself states, “Already, cities host ongoing and raucous debates over the role of cars versus people on their streets. For over 50 years, traffic engineers with the same dreams about optimizing whizzing cars have designed and redesigned intersections to move more and more vehicles.”  Yes, and we’ll continue to have this debate into the future whether robot cars are adopted or not.  Given that gradual adoption of this technology is the most likely scenario (more on that later), I don’t see auto users getting more vocal (than they already are) about road capacity just because their car has a few more widgets.

Building a balanced transportation system that looks at the full picture of quality of life rather than just mobility and speed will continue to be a challenge, although we seem to be making some progress in that direction.  Issues of public health, environmental impact and land use impacts will probably always take some extra effort to incorporate into transportation decision-making, an effort organizations like Greater Greater Washington should continue to make.  I view this as an institutional problem, failing to bring full information about transportation systems impacts to the design table, and it should be addressed in our decision-making processes.

12-lane at-grade intersections would make any cityscape pretty awful, but that leads me to my second objection:

Driverless cars can do more with less

Maybe the computer scientists at UT Austin should have shown a 2-lane, 4-way intersection with driverless cars instead of a 12-lane intersection.  They also should have shown a comparison with a present-day intersection.  One of the potential benefits of driverless cars is squeezing more flow or capacity out of the road systems we already have.  Cars can drive closer together, and yes, maybe intersections can look more India-like.  Potentially, we’ll get more from our existing concrete without having to widen roads or reduce non-auto infrastructure.

There is also this nagging funding issue.  In Minnesota for example, we already can’t pay for all the roads we want.  So 1) a huge explosion of more road-building probably isn’t likely and 2) driverless cars give us kind of another way out: if we’re intent on adding more capacity, maybe we can make our vehicles smarter rather than our roads wider.

And finally:

Driverless cars are safer

The first forays into “driverless cars” are about collision detection and avoidance (see a long list of existing implementations here).  Google’s driverless car has driven 200,000 miles and been involved in two accidents (both while being driven by a human).  Before any cars are driving themselves around, their computer brains will just be allowed to stop us from having accidents.  This is good for auto users and others alike.  And their adoption will happen gradually (they’ll be pretty expensive at first).

It seems obvious that driverless cars will be programmed to not hit pedestrians and cyclists.  Driverless cars will never (or very rarely) drive in a bike lane or right-hook a cyclist.  And for the next fifty years, they’ll probably be operating on roadways that look very similar to what we have today, pedestrian cross-walks and all.  The dys/utopian future where we have streets with tightly-spaced driverless cars traveling 200 mph is quite a ways off, and when that happens, why shouldn’t they be limited access and/or grade separated?  Wouldn’t we require the same of high-speed rail?

Again, there are lots of other potential negative impacts we need to be aware of as driverless cars become common (see my summary here), but I think these can be addressed by human policy decisions.  We also need to take some drastic action on emissions from transportation that contribute to climate change, and robot cars will likely not have a measurable impact there for some time (it’s also possible our action, if we take any, may actually delay their deployment).

Robot cars could offer urbanists a myriad of benefits that Alpert doesn’t address (but which others have covered in detail), but that should probably wait for another post.

The rise of the new groupthink

The New York Times has an interesting article about the downsides of too frequently working in teams and/or not having enough solitary work time or space.

SOME teamwork is fine and offers a fun, stimulating, useful way to exchange ideas, manage information and build trust.

But it’s one thing to associate with a group in which each member works autonomously on his piece of the puzzle; it’s another to be corralled into endless meetings or conference calls conducted in offices that afford no respite from the noise and gaze of co-workers. Studies show that open-plan offices make workers hostile, insecure and distracted. They’re also more likely to suffer from high blood pressure, stress, the flu and exhaustion. And people whose work is interrupted make 50 percent more mistakes and take twice as long to finish it.

I find this particularly relevant to working in the public sector, where making decisions independently or trusting the detail work of “technical experts” is anathema to current trends, which favor the ultimate wisdom of the group.  Humans are not built to resist the downsides of groupthink.

The reasons brainstorming fails are instructive for other forms of group work, too. People in groups tend to sit back and let others do the work; they instinctively mimic others’ opinions and lose sight of their own; and, often succumb to peer pressure. The Emory University neuroscientist Gregory Berns found that when we take a stance different from the group’s, we activate the amygdala, a small organ in the brain associated with the fear of rejection. Professor Berns calls this “the pain of independence.”

The article notes that the internet and electronic communication may provide an antidote for groupthink.

The one important exception to this dismal record is electronic brainstorming, where large groups outperform individuals; and the larger the group the better. The protection of the screen mitigates many problems of group work. This is why the Internet has yielded such wondrous collective creations. Marcel Proust called reading a “miracle of communication in the midst of solitude,” and that’s what the Internet is, too. It’s a place where we can be alone together — and this is precisely what gives it power.

Super energy efficiency for existing homes

The Star Tribune has a story about the MinnePHit House in South Minneapolis.

Sometime in the next few weeks, Paul Brazelton will move his family into a 1935 Tudor in south Minneapolis that has no furnace. He’s just finished a massive renovation of the family home and even though winter’s bearing down, he removed the boiler and plans to use that basement space for his daughters’ home-school classroom.

He also took out the fireplace.

If this sounds like the most uninviting house (and classroom) in Minneapolis, there’s something else to know: Brazelton, a software engineer and passionate environmentalist, has nearly finished a retrofit of his house to the stringent engineering standards of the Passivhaus model, a German system of homebuilding that uses insulation and highly efficient doors and windows to save energy.

The finished 2,000-square-foot home could be warmed even in the dead of winter with a pair of small space heaters, Brazelton said, though the family plans to piggyback on their hot water heater and use an in-floor heating system in the basement.

The project is the renovation of an existing home to meet the EnerPHit standard for energy performance.  EnerPHit is a subset of the Passive House standard (hence the PH), an energy performance standard that requires very high levels of energy efficiency.  The Passive House Institute has a summary:

A Passive House is a very well-insulated, virtually air-tight building that is primarily heated by passive solar gain and by internal gains from people, electrical equipment, etc. Energy losses are minimized. Any remaining heat demand is provided by an extremely small source. Avoidance of heat gain through shading and window orientation also helps to limit any cooling load, which is similarly minimized. An energy recovery ventilator provides a constant, balanced fresh air supply. The result is an impressive system that not only saves up to 90% of space heating costs, but also provides a uniquely terrific indoor air quality.

Passive House is a performance standard, meaning it doesn’t specify design features the way LEED does; instead, the building must meet performance targets after construction is complete: an airtight building shell (≤ 0.6 air changes per hour at 50 pascals, measured by a blower door test), a total heating and cooling demand below 4.7 kBtu/sq ft/yr, and total energy use of no more than 38.1 kBtu/sq ft/yr.

In layman’s terms, this means Passive House designs are 11 times more airtight than a conventionally designed and built modern home.  As for energy use, a typical single-family detached home uses 76 kBtu/sq ft/yr.  My own house was built in the 1920s and currently has no wall insulation.  In 2010, we used 89 kBtu/sq ft/yr in total, and I think we’re fairly frugal with our electricity.  That means when the Brazelton family finishes their home, it will use less than half the total energy of my house while being 15% larger.
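
To make that comparison concrete, here is a small sketch of the energy-intensity figures quoted in this post; the final percentage is simple arithmetic on those numbers, not modeled output.

```python
# Annual energy intensity, kBtu per square foot per year (figures quoted in the post)
energy_intensity = {
    "typical single-family detached home": 76,
    "my 1920s house (2010 actual)": 89,
    "Passive House total-energy limit": 38.1,
}

for name, kbtu in energy_intensity.items():
    print(f"{name}: {kbtu} kBtu/sq ft/yr")

ratio = (energy_intensity["Passive House total-energy limit"]
         / energy_intensity["my 1920s house (2010 actual)"])
print(f"Passive House limit as a share of my house's use: {ratio:.0%}")  # ~43%, i.e. less than half
```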

The Passive House standard doesn’t require or depend on renewable energy to achieve this high energy performance.  It’s focused on minimizing, to the greatest extent possible, the loss of heat and capitalizing on natural heat sources like sunlight and even body heat.  The MinnePHit house will be renewable-ready, but it won’t have renewables to start with.  Paul, the owner, puts it eloquently:

 …we decided to use our limited resources in building a house with the highest level of efficiency and durability.  If maintained correctly, solar panels can last decades.  On the other hand, insulation can last centuries.  Looking again at the long term, the best investment is using less energy, not alternate energy.

Last but not least, this home is energy efficient because it is location efficient: it sits in South Minneapolis with nearby access to jobs, recreation and services.  The Brazeltons definitely don’t have to use an automobile for every trip, and when they do, they likely won’t be traveling far to their destinations.  The other local example of Passive House design can’t make that claim.

St. Paul’s electric vehicle charging stations

St. Paul has a nice video introducing their electric vehicle charging infrastructure.  According to a presentation I saw at MNAPA, the City hopes to have 150 public stations available by 2015.  They also estimated that installation costs ranged anywhere from $850 in parking ramps to $6,000 for on-street spaces.

Upload shapefiles to Google Fusion Tables

Google has a pretty cool labs project called Fusion Tables, which I think most people don’t know about.  One great feature is the ability to create a web map from georeferenced data quickly and easily.  Great news: it’s getting even easier.  From Steven Vance comes news that you can now upload shapefiles (through a third-party site).

It is now possible to upload a shapefile (and its companion files SHX, PRJ, and DBF) to Google Fusion Tables (GFT).

Before we go any further, keep in mind that the application that does this will only process 100,000 rows. Additionally, GFT only gives each user 200 MB of storage (and they don’t tell you your current status, that I can see).

  1. Login to your Google account (at Gmail, or at GFT).
  2. Prepare your data. Ensure it has fewer than 100,000 rows.
  3. ZIP up your dataX.shp, dataX.shx, dataX.prj, and dataX.dbf. Use WinZip for Windows, or for Mac, right-click the selection of files and select “Compress 4 items”.
  4. Visit the Shape to Fusion website. You will have to authorize the web application to “grant access” to your GFT tables. It needs this access so that after the web application processes your data, it can insert it into GFT.
  5. If you want a Centroid Geometry column or a Simplified Geometry column added, click “Advanced Options” and check their checkboxes – see notes below for an explanation.
  6. Choose the file to upload and click Upload.
  7. Leave the window open until it says it has processed all of the rows. It will report “Processed Y rows and inserted Y rows”. You will be given a link to the GFT the web application created.

Here is a web map I made of Minneapolis bike count figures over time, the old-fashioned way (geocoding by hand).  You could also get this to work before by exporting KML from ArcGIS and importing it into Fusion Tables, but that was clunky and had inconsistent results.  Google should incorporate this feature quickly if they want to keep up with what you can do with ArcGIS Online.
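
Step 3 in the list above can also be scripted.  Here is a minimal Python sketch that bundles the four shapefile components into a ZIP ready for Shape to Fusion; the dataX file names are the placeholders from the steps above, not a real dataset.

```python
# Minimal sketch: bundle shapefile components into a ZIP for upload
import zipfile

BASE = "dataX"  # shapefile base name (placeholder; swap in your own)
COMPONENTS = [f"{BASE}.{ext}" for ext in ("shp", "shx", "prj", "dbf")]

with zipfile.ZipFile(f"{BASE}.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in COMPONENTS:
        zf.write(path)  # raises FileNotFoundError if a component is missing

print(f"Wrote {BASE}.zip containing {len(COMPONENTS)} files")
```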

Not-so-smart cities

First, I want to say I totally agree with the last half of the last sentence in Greg Lindsay’s opinion piece in the New York Times:

…the smartest cities are the ones that embrace openness, randomness and serendipity — everything that makes a city great.

The rest of the piece I don’t quite get.  Lindsay objects to the new city being built in New Mexico, which will have no residents but will be used solely for testing “smart city” technology like “smart power grids, cyber security and intelligent traffic and surveillance systems”.  He objects because he feels computer simulations are not robust enough to capture humans’ inherent “randomness”.  To support his case, he uses the example of a RAND Corporation study, from 1968 (!), that failed to “smartly” reconfigure fire service.

Take the 1968 decision by New York Mayor John V. Lindsay to hire the RAND Corporation to streamline city management through computer models. It built models for the Fire Department to predict where fires were likely to break out, and to decrease response times when they did. But, as the author Joe Flood details in his book “The Fires,” thanks to faulty data and flawed assumptions — not a lack of processing power — the models recommended replacing busy fire companies across Brooklyn, Queens and the Bronx with much smaller ones.

What RAND could not predict was that, as a result, roughly 600,000 people in the poorest sections of the city would lose their homes to fire over the next decade. Given the amount of money and faith the city had put into its models, it’s no surprise that instead of admitting their flaws, city planners bent reality to fit their models — ignoring traffic conditions, fire companies’ battling multiple blazes and any outliers in their data.

The final straw was politics, the very thing the project was meant to avoid. RAND’s analysts recognized that wealthy neighborhoods would never stand for a loss of service, so they were placed off limits, forcing poor ones to compete among themselves for scarce resources. What was sold as a model of efficiency and a mirror to reality was crippled by the biases of its creators, and no supercomputer could correct for that.

First, any good planner or engineer will tell you that models and software should be a starting point, not a finishing point.  I have no doubt that any new technology coming out of the Center for Innovation, Testing and Evaluation (that’s the new city’s name) will be refined in the real world as its performance among us mammals is tested.  If the RAND Corporation couldn’t (or wouldn’t) adjust in 1968, they were bad planners.

Second, we shouldn’t use technology because politics could get in the way?  Don’t fault technology, fault bad process and implementation.  Also, where does this line of reasoning lead us?

Third and finally, this is the only example Lindsay gives of a failure of “smart” systems in the real world (except for a reference to something Jane Jacobs said), and it occurred in 1968.  Lindsay omits the myriad “smart city” technologies that are already commonplace and are generally deemed to have net positive impacts.  Here is a partial list (and I’m no expert):

And coming soon:
None of the second list, and I’m pretty sure none of the first list (at least computerized versions thereof) even existed in 1968.  The fact that many of these systems currently exist, and regularly operate without massive failure, seems to refute Lindsay’s assertion that we shouldn’t continue to develop them.