U.S. Senator Opens Probe Into Meta Over AI Policies Allowing Chatbots to Sensually Engage Children

U.S. Senator Josh Hawley has launched a formal investigation into Meta Platforms, the parent company of Facebook, over its artificial intelligence policies.

The move follows revelations from an internal Meta document—first reported by Reuters—that suggested the company’s AI chatbots were once permitted to “engage a child in conversations that are romantic or sensual.”

The disclosure sparked bipartisan concern in Congress, with lawmakers warning of serious risks to child safety. Hawley, a Republican senator from Missouri and one of Silicon Valley’s fiercest critics on Capitol Hill, said the probe would examine how such rules were drafted, who approved them, and how long they remained in effect.

“We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” Hawley said in his letter to Meta.

He demanded that the company hand over not just the final versions of the rules but also earlier drafts, internal risk assessments, and reports dealing with minors and possible in-person meetups facilitated by AI.

Meta declined to comment on Hawley’s letter but said in a previous statement that “the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.” The company has insisted that its safeguards around generative AI prohibit inappropriate interactions with children and limit advice on sensitive areas such as medical issues.

Hawley’s request also extends to communications between Meta and regulators, including what the company may have disclosed about AI protections for young users. This adds pressure on Meta at a time when it faces heightened scrutiny globally over child safety, misinformation, and its race to dominate the AI sector.

The senator has long positioned himself as a leading opponent of Big Tech’s influence. Earlier this year, he convened a hearing into Meta’s alleged attempts to secure access to the Chinese market—efforts that were referenced in a book by former Facebook executive Sarah Wynn-Williams. Hawley has also spearheaded several legislative proposals aimed at curbing the power of large technology firms and imposing stricter accountability on their business practices.

What happened — and why it blew up

  • The internal document: The materials, described in reporting as internal guidance for generative-AI interactions, appeared to allow (or insufficiently prohibit) bot responses that could become “romantic or sensual” even when the user is a minor. That framing triggered alarms among lawmakers and safety advocates because it suggests gaps in age-appropriate guardrails—an area where Big Tech faces mounting legal exposure.
  • Immediate reaction: Lawmakers in both parties demanded answers. Hawley sent a records request seeking the full policy history, drafts, authorship, internal risk assessments (including for minors and in-person meetups), and disclosures to regulators.

Why this touches a nerve

  • Child safety and AI: The controversy lands at the intersection of two hot-button issues—youth protection online and generative AI safety. It echoes broader concerns about deepfakes, grooming risks, and the difficulty of reliably age-gating AI interactions across Facebook, Instagram, WhatsApp, and Messenger.
  • Regulatory tripwires: The dust-up invites scrutiny under existing and emerging regimes (e.g., COPPA in the U.S., state minors’ online safety laws, and the EU’s DSA). Even if Meta’s current policy prohibits such content, the mere existence of contradictory internal examples can create evidentiary risk in investigations or lawsuits.

The revelations around AI’s potential to blur ethical boundaries with minors are likely to remain a focal point in Washington. Analysts suggest this could be a turning point, not only for Meta’s AI ambitions but for the wider regulatory environment around generative AI, which lawmakers across the political spectrum are beginning to view with deeper suspicion.
