How to train a robot to be nice

In response to fears that robots will take over and exterminate the human race, researchers at the Georgia Institute of Technology are studying ways to teach robots human ethical values.

In the absence of an aligned reward signal, a reinforcement learning agent can perform actions that appear psychotic. For example, consider a robot that is instructed to fill a prescription for a human who is ill and cannot leave his or her home. If a large reward is earned for acquiring the prescription but a small amount of reward is lost for each action performed, then the robot may discover that the optimal sequence of actions is to rob the pharmacy because it is more expedient than waiting for the prescription to be filled normally.
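To make that failure mode concrete, here is a toy sketch (my own illustration, not anything from the Georgia Tech paper): a reward function that only values acquiring the prescription and charges a small cost per action makes the antisocial plan the highest-scoring one.

```python
# Hypothetical numbers chosen only to illustrate reward misalignment.
GOAL_REWARD = 100   # reward for acquiring the prescription
STEP_COST = 1       # small penalty for each action performed

def plan_return(num_steps):
    """Total return for a plan that reaches the goal in num_steps actions."""
    return GOAL_REWARD - STEP_COST * num_steps

wait_politely = plan_return(num_steps=30)  # queue up, pay, wait for the pharmacist
rob_pharmacy  = plan_return(num_steps=5)   # grab the drugs and run

print(wait_politely, rob_pharmacy)  # 70 vs. 95: the antisocial plan wins
# Unless the reward signal also encodes social norms (say, a large penalty
# for theft), a planner maximizing this return will choose to rob the store.
```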

This is why it’s important to teach intelligent agents not only the basic skills but also the tacit, unwritten rules of our society. There is no manual for good behavior and “raising a robot” from childhood is an unrealistic investment of time. The best way to pass on cultural values is through stories.

Stories encode many forms of tacit knowledge. Fables and allegorical tales passed down from generation to generation often explicitly encode values and examples of good behavior.

But there are problems with throwing a bunch of stories at an artificial intelligence and expecting it to learn good behavior.

Stories are written by humans for humans and thus make use of commonly shared knowledge, leaving many things unstated. Stories frequently skip over events that do not directly impact the telling of the story, and sometimes also employ flashbacks, flashforwards, and achrony which may confuse an artificial learner.

To resolve this, the researchers used something they call the Scheherazade System (named after the storyteller from One Thousand and One Nights) to build up a collection of experiences to put stories into context. The system uses Amazon’s Mechanical Turk to create simple, easy-to-parse scripts of common occurrences that we all take for granted as common knowledge. For example, “drinks are usually ordered before a meal at a restaurant, popcorn purchased before you go to your seat at the cinema,” explains one paper.
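A rough way to picture the output (this is my own simplification, not the actual Scheherazade implementation): each crowdsourced script boils down to an ordered list of events, and an agent’s behavior can be checked against that typical ordering.

```python
# Hypothetical "common knowledge" script distilled from crowdsourced stories.
RESTAURANT_SCRIPT = [
    "enter_restaurant",
    "sit_at_table",
    "order_drinks",
    "order_meal",
    "eat_meal",
    "pay_bill",
    "leave_restaurant",
]

def follows_script(actions, script=RESTAURANT_SCRIPT):
    """True if the agent's actions appear in the same relative order as the script."""
    positions = [script.index(a) for a in actions if a in script]
    return positions == sorted(positions)

print(follows_script(["enter_restaurant", "order_drinks", "order_meal", "pay_bill"]))  # True
print(follows_script(["enter_restaurant", "eat_meal", "order_meal"]))  # False: eats before ordering
```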

Fascinating stuff. I hope they make progress for Elon Musk’s sake.

Quotes are from a Georgia Institute of Technology research paper, “Using Stories to Teach Human Values to Artificial Agents.”

Further Reading:

Robot underlords

In 15 minutes, CGP Grey’s Humans Need Not Apply paints a bleak picture for anyone who thinks that the coming robot revolution will free everyone up for more creative pursuits. Trouble is, poetry and painting don’t pay the rent.

Transportation, moving things and people from point A to point B, employs millions of people today. What will happen to them when self-driving transport is perfected? During the Great Depression, 25% of the workforce was out of work and unable to feed itself. Pointing to a list of the jobs in danger of automation, Grey argues:

This list above is 45% of the workforce. Just what we’ve talked about today, the stuff that already works, can push us over that number pretty soon. And given that even in our modern technological wonderland new kinds of work are not a significant portion of the economy, this is a big problem.

This is not something that will happen sometime in the future; it’s something that’s already happening. Amazon’s Robot Army was mobilized two years ago. It’s a recurring theme, robots taking over and turning against their makers. Coming soon to a theater near you in October: Autómata.

I’m not too worried. According to Derrick Harris (who writes about this kind of stuff):

Building an AI system that excels at a particular task — even a mundane one such as recognizing breeds of dogs — is hard, manual work. Even so-called “self-learning” systems need lots of human tuning at every other step in the process. Making disparate systems work together to provide any sort of human-like concept of reality would be even harder still.

When data becomes dangerous: Why Elon Musk is right and wrong about AI

Before AI systems can communicate with each other and learn, we’ll need standards. As long as the creation of standards remains in the hands of human-based, quasi-governmental international organizations that take ages to agree on anything, we’re safe.

Automatically Familiar

More from the ever-weirder frontier of automated humanity.

The Atlantic has a fascinating piece on how the telemarketing industry has evolved to marry call scripts with recorded snippets of smooth-talking salespeople, creating what the author calls “cyborg telemarketing.”

The company that made the UI above is called Avatar Technologies. They turn the telemarketer’s sales call into a series of clicks on the buttons above. The company’s catchphrase is “Outsourcing without the accent,” and the product is geared towards operators of overseas call centers, but once you visit their site it’s clear they are setting out to solve more than just accents. In their words,

Our Avatar software takes the complexity of a sales pitch and reduces it to the simplicity of just pushing buttons. Once the recordings are loaded onto the Avatar soundboard, our Avatar-enhanced agents are instantly master salespeople. Our agents only need to be effective listeners. There is no reason to train them on how to sell because our Avatar Software does the selling for them.
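The pitch-as-buttons workflow is easy to picture. A minimal sketch, purely my own illustration and not Avatar Technologies’ actual software: each button simply plays a pre-recorded snippet of a native speaker, so the agent only has to click and listen.

```python
# Hypothetical mapping of soundboard buttons to pre-recorded sales snippets.
SOUNDBOARD = {
    "greeting":       "clips/hi_this_is_dave.wav",
    "pitch":          "clips/special_offer_today.wav",
    "handle_price":   "clips/its_less_than_a_coffee.wav",
    "close":          "clips/shall_i_sign_you_up.wav",
}

def handle_click(button, play=print):
    """Play the recording wired to a button; 'play' stands in for the audio backend."""
    play(SOUNDBOARD[button])

handle_click("greeting")  # the agent never has to deliver the pitch themselves
```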

It’s sad to see a persuasive pitch parsed into a formula, but it’s inevitable when we join man to machine. Witness the spam comment templates that have been floating around, such as:

Wow, this {article|post|piece of writing|paragraph} is {nice|pleasant|good|fastidious}, my {sister|younger sister} is analyzing {such|these|these kinds of} things {so|thus|therefore} I am going to {tell|inform|let know|convey} her.|{Saved as a favorite|bookmarked!!}, {I really like|I like|I love} {your blog|your site|your web site|your website}!
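That {option1|option2|…} syntax is known as spintax, and it takes very little machinery to turn one template into an endless stream of “unique” comments. A minimal expander (my own sketch, not any particular spam tool):

```python
import random
import re

def spin(template):
    """Replace each {a|b|c} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]*)\}")
    while pattern.search(template):
        # Expand one innermost group at a time so nested groups also resolve.
        template = pattern.sub(
            lambda m: random.choice(m.group(1).split("|")), template, count=1
        )
    return template

print(spin("Wow, this {article|post|piece of writing} is {nice|pleasant|good}!"))
```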

There’s a whole doc full of this stuff that can be wired up and set loose on all the lonely bloggers out there, in hopes of picking up a response, like those scary Sentinel bots in The Matrix.

Earlier in the month I posted about the robotic phone greeters in Japan, where the practice has been raised to an art form. Later, when Google bought Boston Dynamics, I dug into how that company and others looked to nature for inspiration on how to evolve more efficient robots. This appears to be an ongoing theme.

When will we tip the scales too far and realize that in the pursuit of efficiency we have lost our humanity?

Amazon’s Robot Army

Yesterday I posted videos of two kinds of robots. One showed a driverless car that allowed a blind person to pick up some Mexican food and his dry cleaning; the other, some kind of hive-mind-controlled swarm of micro-quadcopters that seemed to come straight out of a Michael Crichton novel.

Today, via a high school friend who works there, I found out about another type of robot, or rather a robot system, designed to be integrated into a warehouse much the same way a circulatory system feeds nutrients, repairs damage, and removes waste from an organism. Add a self-learning neural net to the “nervous system” of this setup and the singularity has pretty much arrived.

Kiva Systems was recently acquired by Amazon for $775 million, and once you learn what they do, it’s no surprise. Instead of having workers go out into the stacks to pull inventory, the Kiva bots carry shelves of inventory to the workers. It all happens in real time, with inventory being dynamically managed so that less-popular items migrate to the back of the warehouse while faster-moving items come up front. The bots work both ways, too. Not only do they bring items to be shipped, they can also take boxes of new items off the trucks to be stocked into inventory.
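A back-of-the-envelope sketch of that dynamic slotting idea (my guess at the principle, not Kiva’s actual algorithm): shelves holding fast-moving inventory get assigned to slots near the pick stations, while slow movers drift toward the back.

```python
def assign_slots(shelves, slots):
    """
    shelves: dict of shelf_id -> recent pick count
    slots:   slot ids ordered from closest to farthest from the pick stations
    Returns a mapping of shelf -> slot with the busiest shelves up front.
    """
    busiest_first = sorted(shelves, key=shelves.get, reverse=True)
    return dict(zip(busiest_first, slots))

shelves = {"A": 120, "B": 15, "C": 430, "D": 60}   # hypothetical pick counts
slots = ["front-1", "front-2", "mid-1", "back-1"]
print(assign_slots(shelves, slots))
# {'C': 'front-1', 'A': 'front-2', 'D': 'mid-1', 'B': 'back-1'}
```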