New MIT Social Intelligence Algorithm Helps Build Machines That Better Understand Human Goals

In a classic experiment on human social intelligence by psychologists Felix Warneken and Michael Tomasello, an 18-month-old toddler watches a man carry a stack of books toward a closed cabinet. When the man reaches the cabinet, he clumsily bangs the books against its doors several times, then makes a puzzled noise.

Something remarkable happens next: the toddler offers to help. Having inferred the man’s goal, the toddler walks up to the cabinet and opens its doors, allowing the man to place his books inside. But how is the toddler, with such limited life experience, able to make this inference? 

Recently, computer scientists have turned this question toward machines: How can they do the same?

The critical component in engineering this type of understanding is arguably what makes us most human: our mistakes. Just as the toddler could infer the man’s goal merely from his failed attempts, machines that infer our goals need to account for our mistaken actions and plans.

In the quest to capture this social intelligence in machines, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Department of Brain and Cognitive Sciences created an algorithm capable of inferring goals and plans, even when those plans might fail. 

This type of research could eventually be used to improve a range of assistive technologies, collaborative or caretaking robots, and digital assistants like Siri and Alexa. 

Machines That Understand Human Goals

“This ability to account for mistakes could be crucial for building machines that robustly infer and act in our interests,” says Tan Zhi-Xuan, PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS) and the lead author on a new paper about the research. “Otherwise, AI systems might wrongly infer that, since we failed to achieve our higher-order goals, those goals weren’t desired after all. We’ve seen what happens when algorithms feed on our reflexive and unplanned usage of social media, leading us down paths of dependency and polarization. Ideally, the algorithms of the future will recognize our mistakes, bad habits, and irrationalities and help us avoid, rather than reinforce, them.” 

To create their model, the team used Gen, a new AI programming platform recently developed at MIT, to combine symbolic AI planning with Bayesian inference. Bayesian inference provides an optimal way to combine uncertain beliefs with new data, and is widely used for financial risk evaluation, diagnostic testing, and election forecasting.
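
To make the role of Bayesian inference concrete, here is a minimal sketch of the underlying idea: maintain a belief over candidate goals and update it each time an action is observed, so that repeated failures at the cabinet door shift probability toward "open the cabinet." The goals, actions, and likelihood numbers below are invented for illustration; this is not the team's Gen-based model, which combines this kind of inference with symbolic planning.

```python
# A minimal, hypothetical sketch of Bayesian goal inference: keep a belief
# over candidate goals and update it with Bayes' rule as actions arrive.
# The goals, actions, and probabilities are placeholders, not values from
# the MIT paper or the Gen library.

# Prior belief over what the person is trying to do.
prior = {"open_cabinet": 0.5, "stack_books": 0.3, "leave_room": 0.2}

# Probability of observing each action under each candidate goal, with some
# mass reserved for mistakes (e.g., clumsily bumping the door).
likelihood = {
    "open_cabinet": {"walk_to_cabinet": 0.7, "bump_door": 0.2, "other": 0.1},
    "stack_books":  {"walk_to_cabinet": 0.2, "bump_door": 0.1, "other": 0.7},
    "leave_room":   {"walk_to_cabinet": 0.1, "bump_door": 0.05, "other": 0.85},
}

def update(belief, action):
    """One Bayesian update: posterior is proportional to likelihood times prior."""
    unnormalized = {g: belief[g] * likelihood[g].get(action, 1e-6) for g in belief}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

belief = dict(prior)
for observed_action in ["walk_to_cabinet", "bump_door", "bump_door"]:
    belief = update(belief, observed_action)
    print(observed_action, {g: round(p, 3) for g, p in belief.items()})
# Repeated failed attempts at the door make "open_cabinet" the most probable goal.
```

Because failed or clumsy actions still carry some probability under the true goal, the belief keeps shifting toward that goal rather than concluding it was never desired in the first place.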

The team’s model ran 20 to 150 times faster than an existing baseline method called Bayesian inverse reinforcement learning (BIRL), which learns an agent’s objectives, values, or rewards from observed behavior but must compute full policies or plans in advance. The new model inferred goals correctly 75 percent of the time.

“AI is in the process of abandoning the ‘standard model’ where a fixed, known objective is given to the machine,” says Stuart Russell, the Smith-Zadeh Professor of Engineering at the University of California at Berkeley. “Instead, the machine knows that it doesn’t know what we want, which means that research on how to infer goals and preferences from human behavior becomes a central topic in AI. This paper takes that goal seriously; in particular, it is a step towards modeling — and hence inverting — the actual process by which humans generate behavior from goals and preferences.”
