Robot cars with morals

From the New Yorker via the Transportationist.

The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation). What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first-century idea of morality.

Building machines with a conscience is a big job, and one that will require the coordinated efforts of philosophers, computer scientists, legislators, and lawyers. And, as Colin Allen, a pioneer in machine ethics, put it, “We don’t want to get to the point where we should have had this discussion twenty years ago.” As machines become faster, more intelligent, and more powerful, the need to endow them with a sense of morality becomes more and more urgent.

“Ethical subroutines” may sound like science fiction, but once upon a time, so did self-driving cars.

Below are some thoughts about this article that I posted on Google+, reposted here since I suspect Google+ is a ghost town.

It’s somewhat fascinating that we’ll even have this problem, since we have no standardized “ethical subroutine” in our current social contract, even for driving. We generally just tell people to try not to crash. We have signs and rules, but I was never told in driver’s ed which way to swerve when I encounter an oncoming vehicle in a narrow mountain pass in order to minimize loss of life (especially if that swerving would mean self-sacrifice). Likewise, we have no “right” answer we teach in elementary school about how to solve the trolley problem. We’re OK with ambiguity and possibly less-than-utilitarian outcomes when humans do it, but not when our creations do.
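
Just to make the gap concrete, here is a toy sketch (in Python; every maneuver name, harm estimate, and weight below is made up) of what such an “ethical subroutine” might reduce to: score each candidate maneuver by weighted expected harm and pick the minimum. The code itself is trivial. The unsettled part is choosing the harm estimates and the weights, which is exactly the ambiguity above: equal weights give a utilitarian answer, while weighting the occupants more heavily gives a self-preserving one.

```python
# Purely illustrative toy, not a real autonomous-driving API: every name,
# harm estimate, and weight below is invented for the sake of the example.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # rough expected harm to the car's occupants, 0..1
    bystander_harm: float  # rough expected harm to everyone else, 0..1

def choose_maneuver(options, occupant_weight=1.0, bystander_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    Equal weights amount to a utilitarian rule; raising occupant_weight
    encodes self-preservation. Neither choice is obviously the right one.
    """
    return min(
        options,
        key=lambda m: occupant_weight * m.occupant_harm
                      + bystander_weight * m.bystander_harm,
    )

options = [
    Maneuver("brake hard in lane", occupant_harm=0.6, bystander_harm=0.3),
    Maneuver("swerve toward the cliff", occupant_harm=0.9, bystander_harm=0.0),
    Maneuver("swerve into the oncoming lane", occupant_harm=0.2, bystander_harm=0.8),
]

print(choose_maneuver(options).name)                        # "brake hard in lane"
print(choose_maneuver(options, occupant_weight=3.0).name)   # "swerve into the oncoming lane"
```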

To take it a step further, suppose we developed an ethical subroutine for robot cars that we, as a society, generally felt comfortable with. Would we then enforce conformance with that subroutine on the remaining human drivers?
