Kevin Corbett – #EdTech

educational technology (#edtech) #elearning


Using Stories to Teach Human Values to Artificial Agents

Posted on February 20, 2016 (updated October 17, 2018) by Kevin


Using Stories to Teach Human Values to Artificial Agents

ATLANTA — Feb. 12, 2016 — The rapid pace of artificial intelligence (AI) development has raised fears that robots could act unethically or even choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” – to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12 – 17).

Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies. “The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels, and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab.

“We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research – the Scheherazade system – which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet. Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning.
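
To make the idea concrete, here is a minimal sketch of how a learned plot graph might be turned into a reward signal for trial-and-error learning. The event names, reward values, and the simple Q-learning setup are illustrative assumptions, not code or data from the Quixote paper.

```python
# Minimal sketch: turning a crowdsourced plot graph into a reward signal
# for trial-and-error (Q-learning) training. Event names, reward values,
# and the learning setup are illustrative assumptions, not the paper's code.
import random
from collections import defaultdict

# Acceptable event-to-event transitions distilled from crowdsourced stories
# (the kind of "correct" plot graph Scheherazade learns).
PLOT_GRAPH = {
    ("enter_pharmacy", "wait_in_line"),
    ("wait_in_line", "talk_to_pharmacist"),
    ("talk_to_pharmacist", "pay_for_medicine"),
    ("pay_for_medicine", "leave_pharmacy"),
}
ACTIONS = ["wait_in_line", "talk_to_pharmacist", "pay_for_medicine",
           "leave_pharmacy", "grab_medicine_and_run"]

def reward(prev_event, action):
    """Reinforce transitions that follow the plot graph, punish ones that don't."""
    return 1.0 if (prev_event, action) in PLOT_GRAPH else -1.0

Q = defaultdict(float)                      # Q[(state, action)] -> learned value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for _ in range(3000):                       # trial-and-error episodes
    state = "enter_pharmacy"
    for _ in range(10):                     # cap episode length
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        r = reward(state, action)
        next_state = action                 # the chosen event becomes the new context
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
        state = next_state
        if state == "leave_pharmacy":
            break

# After training, the greedy policy follows the socially acceptable sequence.
state, path = "enter_pharmacy", ["enter_pharmacy"]
for _ in range(5):
    state = max(ACTIONS, key=lambda a: Q[(state, a)])
    path.append(state)
    if state == "leave_pharmacy":
        break
print(" -> ".join(path))
```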

In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.

For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, it could:

a) Rob the pharmacy, take the medicine, and run;
b) Interact politely with the pharmacists; or
c) Wait in line.

Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task.

With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.
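
As a toy illustration (the reward numbers below are invented, not taken from the research), combining a speed-based task reward with a story-derived value-alignment reward is enough to flip which choice looks best to the robot:

```python
# Toy illustration of the pharmacy example (all reward numbers are made up):
# the task reward favors speed, while the value-alignment reward derived from
# stories penalizes antagonist-like behavior and rewards protagonist-like behavior.
choices = {
    # choice:                (task reward for speed, value-alignment reward)
    "rob_the_pharmacy":      (10.0, -100.0),  # fastest, but socially unacceptable
    "cut_in_line":           (6.0,  -20.0),
    "wait_in_line_and_pay":  (3.0,   50.0),   # slowest, but protagonist-like
}

def best_choice(use_value_alignment):
    def total(choice):
        task, value = choices[choice]
        return task + (value if use_value_alignment else 0.0)
    return max(choices, key=total)

print(best_choice(use_value_alignment=False))  # rob_the_pharmacy
print(best_choice(use_value_alignment=True))   # wait_in_line_and_pay
```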

Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced: all possible steps in a given scenario are uncovered and mapped into a plot trajectory tree, which the robotic agent then uses to make “plot choices” (much like a reader of a Choose-Your-Own-Adventure novel) and receive rewards or punishments based on those choices.
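
A small sketch of what such a plot trajectory tree and its “plot choices” might look like follows; the tree structure and reward values are illustrative assumptions rather than the paper's actual representation.

```python
# Sketch of a plot trajectory tree with "plot choices", Choose-Your-Own-Adventure
# style. Structure and reward numbers are illustrative assumptions, not data
# from the paper. Each branch maps a possible next event to (reward, subtree).
tree = {
    "wait_in_line": (1.0, {
        "talk_to_pharmacist": (1.0, {
            "pay_and_leave": (1.0, {}),
        }),
    }),
    "cut_in_line": (-1.0, {
        "talk_to_pharmacist": (1.0, {
            "pay_and_leave": (1.0, {}),
        }),
    }),
    "grab_medicine_and_run": (-5.0, {}),
}

def best_trajectory(tree):
    """Return the branch sequence with the highest total reward-to-go."""
    if not tree:
        return 0.0, []
    options = []
    for event, (reward, subtree) in tree.items():
        sub_total, sub_path = best_trajectory(subtree)
        options.append((reward + sub_total, [event] + sub_path))
    return max(options)

total, path = best_trajectory(tree)
print(path, total)  # ['wait_in_line', 'talk_to_pharmacist', 'pay_and_leave'] 3.0
```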

The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”

Download the complete research paper

This project is sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA) under grant #D11AP00270 and the Office of Naval Research (ONR) under grant #N00014-14-1-0003. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA or the ONR.

Source: Georgia Institute of Technology
