AI-Powered Kids’ Toy Turns a Bedroom Into an Attack Surface

Bondu sells a stuffed dinosaur that talks back. Parents get a web portal where they can review chats, set “objectives,” and generally keep an eye on what their kid is doing with the toy. Researchers found that Bondu’s portal mostly worked the other way around. For at least a little while, anyone who could log in with a random Google account could browse through the chat histories of other people’s kids.
WIRED reported the exposure on January 29, 2026, after security researcher Joseph Thacker and web security researcher Joel Margolis walked into Bondu’s web console in minutes and started seeing conversation transcripts, summaries, and profiles that clearly did not belong to them.
Thacker’s write-up read like a basic access-control failure on a public-facing console: you authenticate as “a Google user,” and you end up authorized as if you worked there.
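To make the authentication-versus-authorization distinction concrete, here is a minimal sketch of that failure pattern. The route names, data helpers, and fields are hypothetical and are not Bondu’s actual code; the point is that verifying a Google identity token proves who the caller is, not what they may see.

```typescript
// Sketch of the failure pattern: the route proves the caller is *some* Google
// user (authentication) but never checks that the transcript belongs to them
// (authorization). Paths, helpers, and field names are hypothetical.
import express from "express";
import { OAuth2Client } from "google-auth-library";

const app = express();
const client = new OAuth2Client(process.env.GOOGLE_CLIENT_ID);

// Hypothetical data-access helper: looks a transcript up by id only.
declare function getTranscriptById(
  id: string,
): Promise<{ ownerId: string; text: string } | null>;

app.get("/api/transcripts/:id", async (req, res) => {
  const idToken = req.headers.authorization?.replace("Bearer ", "") ?? "";

  // Authentication: is this a valid Google account? Any Gmail user passes.
  const ticket = await client.verifyIdToken({
    idToken,
    audience: process.env.GOOGLE_CLIENT_ID,
  });
  const user = ticket.getPayload();
  if (!user) return res.status(401).send("not signed in");

  const transcript = await getTranscriptById(req.params.id);
  if (!transcript) return res.status(404).send("not found");

  // BUG: nothing ties `user.sub` to `transcript.ownerId`, so any
  // authenticated Google account can read any family's transcript.
  // return res.json(transcript);

  // Authorization: the missing check.
  if (transcript.ownerId !== user.sub) return res.status(403).send("forbidden");
  return res.json(transcript);
});
```

The fix is one comparison, which is exactly why this class of bug keeps shipping: everything appears to work in testing because the developer is both the authenticated user and the data owner.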
What was exposed
The researchers say they were able to see:
names and birthdates
family member names
“objectives” set by a parent
detailed summaries and transcripts of prior chats between children and their toys
Bondu confirmed to the researchers that more than 50,000 chat transcripts were accessible through the portal, described as essentially all conversations except those manually deleted by parents or staff.
If you’re trying to rank “sensitive kid data,” chat transcripts are near the top. A toy designed to behave like an always-available friend is a prompt for kids to say things they would not type into a normal app. Thacker said it felt intrusive just to see it.
How it was found and how quickly it was “fixed”
The origin story here is almost comically normal. Thacker’s neighbor told him she had preordered the toys for her kids and asked his opinion, because he has worked on AI risks for kids. He looked. He found the portal exposure.
After Thacker and Margolis alerted the company, they say Bondu took the console down within minutes and relaunched it the next day with authentication in place.
Bondu CEO Fateen Anam Rafid told WIRED that fixes were completed within hours, followed by a broader review and additional preventative measures. He also said the company found no evidence of access beyond the researchers.
This part always creates a weird tension. “We fixed it fast” can be true. “No one else accessed it” is harder to treat as comforting when the console was reachable by anyone with a Gmail account, because you’re relying on the absence of evidence in logs the company controls.
The mismatch between the marketing and the reality
Bondu’s site says all the right things to parents. Its FAQ says it uses “industry-standard safeguards such as encryption and secure authentication,” and that access to personal data is strictly limited to authorized core team members who need it.
The exposure described by WIRED and Thacker makes those claims look, at minimum, disconnected from how the portal was actually deployed. That gap is now a political problem, not just a security one.
On February 3, 2026, Senator Maggie Hassan sent Bondu a letter pressing the company for specifics about what happened, who had access to the data, what protections existed, and what the company is doing to prevent a repeat. The letter also highlights child identity theft risks and the more direct danger of chat transcripts being used to manipulate or target children.
Axios reported Hassan gave Bondu until February 23 to respond.
Third-party AI and “where did the kid data go”
WIRED also raised an issue that keeps showing up across consumer AI products: whether user content gets shipped to model providers as part of response generation and safety checks. In this case, the researchers believed Bondu used Google’s Gemini and OpenAI’s GPT-5, and that conversation content might be sent to those services.
The company said it uses third-party enterprise AI services to generate responses and run certain safety checks, and that it tries to minimize what’s sent while using enterprise configurations where providers state prompts and outputs are not used to train models.
That statement will sound familiar to anyone who has sat through a vendor privacy pitch. It still leaves real questions a parent would care about:
What exact fields leave Bondu’s systems?
Are transcripts sent in full, or chunked, or summarized?
What retention exists on Bondu’s side, and what retention exists with providers under the enterprise contract?
Even if you assume perfect intent and perfect contracts, data gravity is a thing. The more a product relies on long-term chat history to “personalize,” the more it accumulates exactly the material you do not want exposed.
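For what “minimize what’s sent” could mean in practice, here is a minimal sketch of field-level minimization before a child’s message leaves a vendor’s systems. Every interface, field name, redaction rule, and provider call in it is an assumption for illustration, not a description of Bondu’s actual pipeline.

```typescript
// Sketch: strip identifiers and redact obvious PII before a child's message
// goes to a third-party model provider. All names here are hypothetical.

interface ChildTurn {
  childId: string;       // internal identifier, never sent upstream
  childName: string;     // never sent upstream
  birthdate: string;     // never sent upstream
  parentObjectives: string[]; // stays on the vendor's side
  message: string;       // the only content the model actually needs
}

// Hypothetical provider call; in a real deployment this would be an
// enterprise endpoint whose contract says prompts are not used for training.
declare function callModelProvider(payload: {
  messages: { role: "system" | "user"; content: string }[];
}): Promise<string>;

function redact(text: string, knownNames: string[]): string {
  // Crude illustration: mask names the vendor already knows about.
  return knownNames.reduce((t, name) => t.replaceAll(name, "[child]"), text);
}

async function generateReply(turn: ChildTurn): Promise<string> {
  const minimized = {
    messages: [
      {
        role: "system" as const,
        content: "You are a friendly toy. Keep replies short and age-appropriate.",
      },
      {
        role: "user" as const,
        content: redact(turn.message, [turn.childName]),
      },
    ],
  };
  // Only the redacted message text crosses the boundary; identifiers,
  // birthdates, and parent objectives never leave the vendor's systems.
  return callModelProvider(minimized);
}
```

Even a sketch like this shows why the parent-facing questions above matter: minimization is a set of concrete decisions about fields and retention, not a line in an FAQ.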
The part that should worry you beyond Bondu
Bondu appears to have put effort into the content side. WIRED notes the company even runs a $500 bounty for “inappropriate responses” from the toy.
At the same time, the portal exposure suggests that basic security engineering and access controls did not get the same level of care. You end up with a product that talks like it is safe while the backend behaves like a prototype.
Margolis warned about the cascading implications: once sensitive data exists inside a company, internal access, credential hygiene, and monitoring matter as much as the external login page.
There’s also a cultural tell in the reporting that’s hard to ignore. The researchers suspected parts of the console may have been “vibe-coded,” built quickly with generative coding tools that can encourage shipping first and securing later. Bondu did not answer WIRED’s question about whether AI tools were used to build the console.
You don’t have to take a position on vibe-coding to see the pattern. Consumer AI companies keep racing to ship “magical” interfaces that invite intimate interaction, then treating the data layer as an implementation detail. Kids’ products make that failure more severe, because the content is more sensitive and the users cannot consent in any meaningful way.
If you’re a parent, the immediate takeaway is simple: assume chat toys generate chat logs, and assume those logs will exist longer than you want them to. If you’re a security person advising parents or schools, the enterprise lesson is also simple: a login page is not a privacy program, and “we fixed it in hours” does not rewrite what was exposed.