THINK TANK #4: How are AI Voice Assistants and Intelligent Toys Changing the Way Children Think and Behave? (new research)

Goal of this Think Tank

“While adults envision the future of AI as focused on self-driving cars, personal assistants, and robot maids, children are more open to the imaginative possibilities. Their flexibility to see and interact with AI agents as entirely new entities is inspiring us to imagine and create novel forms of interaction.” — Stefania Druga

The line between intelligent machines and living things is blurring faster than we can understand how it’s changing us. But what about children? Toy manufacturers are struggling to launch conversational AI devices for children amid backlash from parents and advocacy groups:

  • When Your Kid Tries to Say ‘Alexa’ Before ‘Mama’
  • Mattel Pulls Aristotle Children’s Device After Privacy Concerns
  • Mattel Delays Kids’ Voice Assistant Hello Barbie Hologram Until 2018

While the backlash against AI devices for children has focused primarily on privacy and data rights, the deeper question remains:

How might conversational AI assistants change how children define themselves, their world, and their relationships compared to prior generations, and what does this say about the world we’re creating?

JOIN US as we explore this question and compelling new research published by Stefania Druga and Randi Williams (MIT Media Lab) on how children engage with AI devices. Guest speaker Randi Williams will present the research and discuss the methodologies behind testing how children engage with conversational AI devices. Attendees will then participate in an interactive discussion and problem-solving exercises on how we might ideally design safe AI bots for children, and how technology developers, toy manufacturers, parents, and teachers might prepare and protect children.

Hey Google, is it OK if I eat you?

The focus of this Think Tank is a recent research paper published by Stefania Druga and Randi Williams, Hey Google, is it OK if I eat you?: Initial Explorations in Child-Agent Interaction (June 2017).

Research Abstract:

How do children perceive and interact with autonomous technologies that are becoming more embedded in their daily lives? To answer this question we studied how 26 children (3-10 years old) interacted with these autonomous technologies: Amazon Alexa, Google Home, Anki’s Cozmo, and NDI Development’s Julie Chatbot. In the context of this paper, we refer to them as “agents”. After interacting with these agents, children answered questions about trust, intelligence, social entity, personality, and engagement. We analyze children’s interactions and responses and identify four themes: perceived intelligence, identity attribution, playfulness, and understanding. In the discussion, we address how different modalities of interaction may change the way children perceive intelligence and understand the world around them. We also propose a series of design considerations for future child-agent interactions around voice and prosody, interactive engagement, and facilitating understanding.

Long-term research objectives are motivated by the following questions: (1) How could exposure to, or interaction with, these smart bots affect children? (2) What are the short- and long-term cognitive and civic implications? (3) What design considerations could we propose to address the ethical concerns surrounding these issues?

“The biggest surprise for me was that the things that impressed me, as an engineer, were often not as important to the children… many of the devices that are designed as toys treat the quality of their text-to-speech as an afterthought, and our study suggests that it really matters to kids. It is important for the people creating these technologies to understand how children perceive these devices.” — Randi Williams


The Think Tank will start promptly at 6:15pm. We do not allow any recording of the Think Tank except by our staff. Light food and beverages will be served.

  • 6:00PM – 6:15PM: Sign-in, networking, light food and drinks served
  • 6:15PM – 6:25PM: Announcements, topic and speaker introduction
  • 6:25PM – 7:00PM: Presentation by Randi Williams
  • 7:00PM – 7:20PM: Interactive, problem-solving Think Tank
  • 7:20PM – 7:35PM: Informal presentations and feedback by Randi Williams
  • 7:35PM – 7:45PM: Q&A with Randi Williams
  • 7:45PM – 8:00PM: More mingling and then it’s a wrap!

About Randi Williams

Randi Williams, Research Assistant, MIT Media Lab

Randi is currently pursuing a PhD in Media Arts and Sciences with a concentration in Robotics and Human-Robot Interaction at MIT. She plans to use her research to develop assistive devices and collect behavioral information for well-being and educational applications.

She has worked at MIT Lincoln Laboratory, MIT’s Media Lab, NASA Jet Propulsion Lab, and Jawbone. At MIT Lincoln Laboratory she worked in the Decision Support and Informatics Group doing video analytics through machine learning. At MIT’s Media Lab she worked for the Fluid Interfaces group and developed web and mobile applications to help busy individuals automatically track key aspects of their day such as sleep, commute, movement, and location. She conducted research in the robotics department of NASA JPL as a Caltech Southern California Edison Company MURF Fellow. Additionally, she worked at Jawbone as a CODE2040 Fellow on the Tools Team (web development).

At UMBC, Randi founded and led hackUMBC, an organization that brought hackathons to UMBC’s campus.

She is passionate about helping others learn, and about exploring other cultures and the experiences that shape people into who they are today.

Twitter: @randi_c1

Get Tickets



Start Time

6:00 pm

January 11, 2018

Finish Time

8:00 pm

January 11, 2018


Infinito Gallery NYC, 79 Leonard Street, NY, NY