The dinosaur, which Darling calls Mr. Spaghetti, purrs to life. He lifts his little green head and closes his blue eyes affectionately as she rubs his neck. “He has touch sensors in his head and on his back,” she explains. “They develop their own personalities depending on how you treat them.”
Mr. Spaghetti is a Pleo, a “next-generation robotic companion pet” manufactured by a company based in Hong Kong. Darling, an expert in social robotics, has four Pleos scattered around her apartment, including a feisty one named Yochai and a shy one named Peter.
She holds up Mr. Spaghetti by his tail. A few seconds later, the onboard tilt sensor sends the little dinosaur into distress mode. Mr. Spaghetti starts to bleat and contort. “He’s not happy,” says Darling. More than that, in this moment Darling isn’t happy either. It’s what struck her when she first got Yochai a decade ago. “I found myself getting distressed when it mimicked distress. I wanted to know why I was responding like this even though I knew it was all fake.” That is, why was she having these feelings for Yochai but not for a smoke alarm crying out for its batteries to be replaced? And what were the moral implications of developing an emotional tether to a dynamic, social robot?
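To make the mechanism concrete, here is a minimal sketch of how a behavior like that could be wired up. It is not Pleo's actual firmware; the class name, pitch threshold, and delay are illustrative assumptions. The idea is simply that a tilt sensor reports orientation, and a few seconds spent upside down flips the toy into a distress mode.

```python
# Toy sketch, not Pleo's actual firmware: an onboard tilt sensor switches the
# toy into a "distress" behavior after a few seconds spent upside down.
# The class name, threshold, and delay below are illustrative assumptions.
import time

DISTRESS_DELAY_S = 3.0        # assumed seconds inverted before distress begins
UPSIDE_DOWN_PITCH_DEG = 120   # assumed pitch beyond which the toy counts as inverted

class RobotPet:
    def __init__(self):
        self.inverted_since = None
        self.mode = "content"

    def update(self, pitch_degrees: float) -> str:
        """Call periodically with the latest tilt-sensor reading."""
        now = time.monotonic()
        if abs(pitch_degrees) > UPSIDE_DOWN_PITCH_DEG:
            if self.inverted_since is None:
                self.inverted_since = now           # just went vertical
            elif now - self.inverted_since > DISTRESS_DELAY_S:
                self.mode = "distress"              # bleat and contort
        else:
            self.inverted_since = None              # back to horizontal
            self.mode = "content"
        return self.mode

# Example: held up by the tail (pitch ~180 degrees) long enough to protest.
pet = RobotPet()
pet.update(180.0)
time.sleep(3.1)
print(pet.update(180.0))  # -> "distress"
```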
New Norms
When Darling arrived at the MIT Media Lab as a research specialist, she befriended a handful of roboticists. She told them about the questions that her pet dinosaurs had conjured for her. Darling and the roboticists soon realized they each had something to offer the other. For instance, the engineers were designing a robot to interact with kids in the hospital. Immediately, Darling—who holds both a doctorate and a law degree—started racing through the ethical implications of such a robot: what kinds of information would it be collecting, and would that be in violation of hospital privacy policies? The roboticists admitted these questions had never crossed their minds. “I used to think that the technologists’ job was to build the stuff,” Darling says, “and it was my job as a social scientist to come in later and figure out how to regulate it.” But she learned that design decisions made early on, which can be hard to adjust subsequently, can and should be informed by ethical and regulatory considerations.
With the integration of robots into our daily lives imminent, Darling has a new pair of questions: what kind of society do we hope to create through the use of robotic technologies, and what policies will guide us in that direction? The complexity of the scenarios that need addressing ratchets up fast.
Let’s return to Mr. Spaghetti. In response to his distress, Darling felt distress. But there are other behavioral responses. After holding the dinosaur up by his tail repeatedly, you might get desensitized to his cries imploring a return to the horizontal. You might even inflict vertical panic on him intentionally. Darling thinks about what that means for someone’s subsequent interactions with real beings. In her words, “harm arises when you start treating the real dog the way you treat the robot dog.”
When the robots are humanoid, and their potential uses are so open-ended, the notion of an emotional tether becomes even more fraught. In a world populated by ever-more life-like toys and tools, what are the norms for how we engage with them? How should policy prescribe those norms? For instance, just like with that hospital robot, what types of data are AI-powered household assistants such as Alexa gathering about us, and how should that harvest of information be regulated? Might robots that are engineered to be especially likable manipulate us into sharing more details about our lives?
In addition to producing well-reasoned articles on such topics, Darling attends conferences that bring together technologists and lawmakers under the same roof. During the Obama Administration, she met with the Office of Science and Technology Policy to help advise them on two reports, one on artificial intelligence more broadly and one on the intersection of AI and jobs. “I try to not just stay within academia,” Darling says, “but actually talk to people who are making policy in the hopes that something can be done.”
Such dialogue may soon have new fodder, thanks to a new multidisciplinary endeavor—recently announced by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University—to investigate the ethics and governance of artificial intelligence. The effort intensifies MIT’s commitment not just to develop new technologies but to understand and guide how such advances may transform society. Darling is one of numerous researchers at MIT—within the Media Lab, as well as in departments ranging from physics to political science to computer science to information technology—who are now laying the groundwork for grappling with the social and legislative ramifications of AI’s future.
The Swerve
On the fourth floor of the Media Lab building, a framed illustration stands on an easel. It depicts blocky, pixelated objects, like those in an old video game. A blue car is whipping along a city road. Half of the intersection ahead is blocked by a concrete barricade. A father and his daughter are walking their dog across the open part of the road. There’s not enough time for the car to brake safely. Swerve to avoid the pedestrians, and the two passengers in the car—a mother and her son—won’t survive the collision with the barricade. Stay on course, and the father and daughter will suffer a fatal hit.
It sounds like a variation on the classic “trolley problem”—a term coined by MIT philosopher Judith Jarvis Thomson in 1976. The twist is that the blue car is a self-driving vehicle. In this hypothetical scenario, the car decides whether to kill its occupants or strike the pedestrians ahead. What kind of moral and legal constraints should inform the car’s decision? And how should policy makers enforce those constraints? “The ethical question at the heart of AI is about a new type of object,” says Iyad Rahwan, AT&T Career Development Professor of Media Arts and Sciences, leader of the Media Lab’s Scalable Cooperation group, and affiliate faculty at MIT’s Institute for Data, Systems, and Society. “This new object is no longer a passive thing controlled by human beings. It has agency, the ability to make autonomous choices, and it can adapt based on its own experiences independent of its design.” Rahwan commissioned the illustration of the blue car to highlight one possible moral dilemma that an autonomous vehicle might face. There are countless others.
To get at whether there are any guiding ethical principles when it comes to the kinds of ambiguous scenarios a car might face, Rahwan and others in his lab created an online poll of sorts called the Moral Machine, in which participants are presented with morally charged setups similar to the one in the illustration. The Moral Machine went viral. Almost three million people from all over the world have clicked through the scenarios. Rahwan’s team translated the experience into nine languages including Arabic, Chinese, and Russian. They’re still working up the results, but a couple of things are clear. “Almost every person has something to say about this,” observes postdoctoral fellow Edmond Awad SM ’17, one of roughly 20 researchers and students in Rahwan’s group. “And we found broad cultural differences between eastern countries and western countries.”
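For a sense of what a study like this involves, here is a minimal sketch, not the actual Moral Machine pipeline, of presenting a two-option dilemma and tallying respondents' choices by region; the scenario text, region labels, and responses are all illustrative assumptions.

```python
# Toy sketch, not the actual Moral Machine pipeline: present a two-option
# dilemma and tally choices by respondent region. The scenario text,
# regions, and responses below are illustrative assumptions.
from collections import Counter

dilemma = {
    "stay": "the car stays on course and hits the father and daughter",
    "swerve": "the car swerves and the mother and son hit the barricade",
}

# Hypothetical responses collected from the online poll: (region, choice)
responses = [
    ("east", "stay"), ("east", "stay"), ("west", "swerve"), ("west", "stay"),
]

tallies = {}  # region -> Counter of choices
for region, choice in responses:
    tallies.setdefault(region, Counter())[choice] += 1

for region, counts in sorted(tallies.items()):
    print(region, dict(counts))  # surfaces broad regional differences
```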
One paradox that’s emerged from this study and others is that even though consumers agree that self-driving cars should make decisions that minimize fatalities, these same people are less likely to buy a car programmed to sacrifice its occupants according to that rule. Sydney Levine, another postdoc in Rahwan’s group, refers to this as the “tragedy of the algorithmic commons.” There’s urgency to resolving the paradox, though. “Getting self-driving cars on the road quickly is going to save lots of lives,” she says, “reducing the number of traffic accidents by 90%.”
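To see why a bare "minimize fatalities" rule leads straight to the paradox, consider a toy version of the decision. This is not any manufacturer's actual policy; the action names and fatality counts are assumptions for illustration. A rule that always picks the outcome with the fewest deaths will, in some configurations, pick the one that sacrifices the car's own occupant.

```python
# Toy illustration, not any manufacturer's actual policy: a bare
# "minimize fatalities" rule applied to swerve-style dilemmas.
def choose_action(fatalities_by_action: dict) -> str:
    """Return the action with the fewest expected fatalities (ties resolved arbitrarily)."""
    return min(fatalities_by_action, key=fatalities_by_action.get)

# The illustrated dilemma: staying on course kills the two pedestrians,
# swerving into the barricade kills the two occupants. A dead heat, so the
# rule alone cannot settle it.
print(choose_action({"stay_on_course": 2, "swerve_into_barricade": 2}))

# A variant with a single occupant: the rule now sacrifices the occupant,
# which is exactly the outcome survey respondents say they would not buy.
print(choose_action({"stay_on_course": 2, "swerve_into_barricade": 1}))
# -> swerve_into_barricade
```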
What’s clear is that the ethics of our machines will reflect the ethics of ourselves, “which forces us to articulate our own ethics more explicitly,” says Rahwan. “So in a way, the development of AI can make us better humans” because it’s pushing us to question who we are and define what we want our world to be. When it comes to making policy to guide AI, Rahwan says the difficulty is developing regulations that allow people to feel at ease with and trust something like a driverless car without stifling innovation. In that spirit, his group is also working on algorithms that improve human-computer cooperation and even exploring the use of emojis in training machines to better understand human emotion.
On a personal level, there’s an even deeper urgency motivating Rahwan’s research. He was born in Aleppo, Syria, where he recently witnessed the rapid and wholesale collapse of Syrian society and institutions. “I want to help come up with a new way of running the world that’s more robust,” he says. He acknowledges, from personal, painful experience, that human beings are fallible. So he’s hoping to make machines that learn from our failings to help strengthen our communities and social fabric. In other words, Rahwan believes that artificial intelligence has the power to make us more human, not less.