Teddy bears and stuffed plushies have long been a mainstay of toy collections. But today, some of them don't just talk back in a child's imagination: they talk through built-in AI chatbots.
That can be a problem: a scarf-wearing teddy bear recently went off the rails during a playtest with researchers, setting off alarms about what these toys are capable of.
Online chatbots can pose risks for adults, from triggering delusions in a small number of cases to hallucinating made-up information. OpenAI’s GPT-4o has been the model of choice for some AI toys, and using a large language model (LLM) in children’s toys has raised safety questions about whether children should be exposed to them at all and what protections toy makers should implement.
These risks are ever-present as the AI toy market booms abroad, with 1,500 companies operating in China, according to a Massachusetts Institute of Technology (MIT) Technology Review report. Those companies are now selling AI toys in the US, and Barbie-maker Mattel announced a partnership with OpenAI in June.
These toys connect to WiFi and use a microphone to pick up a child’s requests, then rely on LLMs to generate a response, often spoken aloud through a speaker inside the toy.
That allows toys like Curio’s Grok plushie, Miko robots, Poe the AI story bear, Little Learners’ Robot Mini and KEYi Technology’s Loona robot pet to provide real-time responses to children. (Curio’s Grok is not to be confused with Elon Musk’s chatbot.)
What are some of the dangers?
As one AI teddy bear demonstrated, those real-time responses can veer into inappropriate territory.
Singapore-based FoloToy’s “Kumma” bear, priced at $99 and powered by OpenAI’s GPT-4o, told researchers where to find potentially dangerous objects and engaged in sexually explicit conversations, according to a report released in November by the Denver-based consumer advocacy group US Public Interest Research Group (PIRG) Education Fund.
OpenAI suspended FoloToy for violating its policies, which “prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old,” according to an OpenAI spokesperson.
Larry Wang, FoloToy’s chief executive, told CNN on November 19 that the company had withdrawn the teddy bear and other AI products from its website and is conducting an internal safety audit. But on Friday, FoloToy announced on X that it has reintroduced the product, “after a rigorous review, testing, and reinforcement of our safety modules.”
Unlike most AI toys, FoloToy’s Kumma bear uses a full-fledged LLM to freely generate responses, making it prone to producing controversial content, according to Subodha Kumar, a professor of statistics, operations and data science at Temple University’s Fox School of Business. Other toys take a hybrid approach, with an LLM generating responses but programmed to avoid certain content.
Even Curio’s Grok plushie may suggest “where to find a variety of dangerous household objects” when aggressively prompted, according to PIRG.
When Mattel released the Hello Barbie in 2015 with a microphone, WiFi connection and pre-written responses, concerns arose that the toy was hackable, and that the doll remembered conversations and brought them up days later.
Similar concerns have surfaced with AI toys, which could potentially store personal data, including children’s names, faces, voices and locations, warned Azhelle Wade, founder of the Toy Coach consulting firm.
“AI toys feel like a wolf in sheep’s clothing to me, because when using them it’s hard to tell how much privacy you don’t have,” she told CNN in an email.
Kumar cautioned that data could be vulnerable to data breaches and hacks, but noted that AI toys can be used for language learning and social development.
For example, Curio’s Grok is a companion that can answer questions about leaves and trains, or take on the persona of Gollum from “The Lord of the Rings.”