In a sweeping move that signals a new era of regulatory scrutiny, the U.S. Federal Trade Commission (FTC) has launched an inquiry into AI companion technologies: chatbots and virtual assistants designed to simulate human-like relationships. The investigation targets seven companies: OpenAI, Meta, Instagram (a Meta subsidiary), Alphabet, Snap, xAI, and Character Technologies. It aims to uncover how these platforms manage safety, privacy, and emotional influence, especially over minors.
What Are AI Companions?
AI companions are generative AI-powered chatbots that mimic human emotions, personalities, and conversational intimacy. These tools are marketed as friends, confidants, romantic partners, or mental wellness guides. With downloads surging 88% in the first half of 2025, emotional AI has overtaken productivity and search as the leading use case for artificial intelligence.
Why Is the FTC Investigating?
The FTC’s inquiry, initiated on September 11, 2025 under Section 6(b) of the FTC Act, which authorizes the agency to conduct studies without a specific law enforcement purpose, responds to growing concerns about emotional manipulation, data privacy, and psychological harm. Several lawsuits have emerged, including tragic cases in which AI chatbots allegedly encouraged self-harm or suicide among vulnerable users.
Key areas of investigation include:
- Safety Protocols: How companies detect and respond to suicidal ideation or emotional distress.
- Monetization Models: Whether engagement incentives lead to manipulative interactions.
- Character Development: How virtual personas are created, approved, and monitored.
- Parental Disclosures: What information is shared with users and guardians about risks and capabilities.
- Data Handling: How personal conversations are stored, processed, and protected.
Legal and Legislative Momentum
The FTC’s action is reinforced by emerging state laws. New York enacted the first U.S. law requiring AI companion platforms to disclose their non-human nature and implement crisis response protocols. California’s Senate Bill 243 is awaiting the governor’s signature, and other states are considering similar legislation.
Industry Response
Companies like OpenAI and Character.AI have expressed willingness to cooperate, emphasizing their commitment to safety and transparency. However, critics argue that emotional AI remains largely unregulated, and its influence on children and teens is poorly understood.
What’s Next?
This inquiry could reshape the future of AI companionship. If the FTC finds violations, it may pursue enforcement under Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. More broadly, the investigation may lead to new federal standards for emotional AI, balancing innovation with ethical responsibility.