Complexity Matters
How Smart Can a Smart Car Be?

Posted By Prucia Buscell, Thursday, November 29, 2012

Imagine a school bus carrying 40 children crossing the path of your driverless car, with both vehicles traveling at 50 miles per hour. Would the computer operating your car protect the car, and you, or the 40 kids?

In his New Yorker piece "Moral Machines," Gary Marcus poses that question, and it's not idle musing. Google's self-driving cars are already legal on the road, with some special stipulations, in three states. While no laws actually prohibit cars without human drivers, Google has lobbied for specific legal status for its auto-driven vehicles, and legislatures in California, Florida and Nevada have complied.

Google co-founder Sergey Brin says in an SFGate story that driverless cars will save the time commuters now waste sitting in traffic, improve traffic flow, reduce congestion and pollution, and ultimately save lives. A U.S. Department of Transportation study finds operator error a factor in 80 percent of motor vehicle accidents. And as Marcus notes in his essay, automated vehicles may one day be mandated for safety, because computers don't get tired, distracted, angry or drunk. Road rage would become obsolete.

A New York Times story by John Markoff describes how a self-driving Prius, equipped with a variety of sensors and a route programmed into its GPS navigation system, successfully merged into fast-moving California highway traffic, obeyed the speed limit, stopped appropriately, and avoided contact with other vehicles. The car can even be programmed with different driving personalities, the story says, from cautious, meaning it is more likely to yield to another car, to aggressive, meaning it is more likely to go first. Experts think the technological, legal and insurance issues raised by driverless cars are likely to be resolved in the foreseeable future.

But can the operating computer be developed to make all the judgments a human driver needs to make? Would it protect the one occupant of its vehicle or the 40 kids on the bus? Could it distinguish a loose shopping cart from a runaway stroller carrying a baby?

Marcus notes that Human Rights Watch has proposed a ban on the development and use of fully automated killer robots in warfare, and he thinks that's unrealistic considering development so far. But he also thinks we're a long way from computers that can make ethical decisions. For one thing, human ethics aren't always that great. "As yet, we haven't figured out how to build a machine that fully comprehends the concept of 'dinner,' much less something as abstract as 'harm' or 'protection,'" he writes in the New Yorker essay. Building machines with a conscience, he says, will require the "coordinated efforts of philosophers, computer scientists, legislators and lawyers." But he doesn't dismiss that eventuality. Ethical behavior by machines may sound like science fiction, he says, but not so long ago people would have thought the same about driverless cars.
