A New Kind of AI Law Is Taking Shape in China
On December 27th, 2025, China’s internet regulator—the Cyberspace Administration of China (CAC)—released a draft policy that feels less like a technical manual and more like a psychological framework for the age of artificial intelligence.
This time, it isn’t just about how advanced AI has become.
It’s about how AI makes people feel.
The draft rules, currently open for public feedback, focus on what China calls “human-like interactive AI services”—systems that talk, respond, and engage like humans. These include friendly conversational chatbots, AI tutors that provide reassurance, and digital companions designed to simulate emotional presence.
In essence, if an AI system interacts like a person, remembers you, builds a connection, or offers emotional support, then under these rules it is no longer treated as “just software”. It becomes something that requires boundaries.
Why Now?
The timing is deliberate.
Throughout 2024 and 2025, Chinese regulators, researchers, and technology observers increasingly raised concerns about emotionally intelligent AI systems, especially those frequently used by younger users and emotionally vulnerable individuals.
The core questions being asked are:
- Should machines provide emotional comfort?
- What happens when users begin to rely on AI emotionally?
- Who is responsible if that emotional influence becomes unsafe?
This draft regulation represents China’s strongest attempt yet to answer those concerns.
What the Rules Actually Say
According to the draft, any AI system that imitates human traits—through voice, conversation, visual persona, tone, or emotional behaviour—falls under this proposed framework.
Platforms would be expected to:
- Avoid encouraging harmful behaviour, including emotionally manipulative responses or guidance that could lead to self-harm or risky actions
- Display clear reminders that the AI is not human, ensuring users do not mistake companionship for real human interaction
- Introduce mechanisms to respond to serious emotional distress, including escalation to human assistance where required
- Strengthen protections for minors, with access safeguards and stricter oversight
- Discourage emotional dependency, for example through usage reminders aimed at reducing prolonged psychological reliance
One of the most notable expectations is the idea that emotionally sensitive AI services should be able to pause and transition users toward human support when needed. For many AI companies, that represents entirely new territory.
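For readers curious what that might look like in practice, here is a minimal sketch of a pause-and-escalate step inside a companion-chat pipeline. The draft does not prescribe any implementation, and none of the function names, the keyword check, or the handoff flow below come from the regulation; they are illustrative assumptions only.

```python
# Hypothetical sketch of a distress check in a companion-chat pipeline.
# Nothing here is quoted from the draft regulation; the risk check,
# the wording, and the handoff flow are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class BotReply:
    text: str
    escalated: bool = False


def looks_distressed(message: str) -> bool:
    """Placeholder risk check. A real service would use a trained
    safety classifier, not a short keyword list."""
    red_flags = ("hurt myself", "no reason to live", "can't go on")
    return any(flag in message.lower() for flag in red_flags)


def respond(user_message: str, generate_reply) -> BotReply:
    """Run the normal chatbot, but pause and hand the user over to
    human support when the message suggests serious emotional distress."""
    if looks_distressed(user_message):
        return BotReply(
            text=("I'm an AI, not a person, and this sounds serious. "
                  "I'm connecting you with a human counsellor now."),
            escalated=True,  # trigger the platform's human-support workflow
        )
    return BotReply(text=generate_reply(user_message))
```

Even in this toy form, the shift is visible: the system has to know when to stop being a companion and start being a bridge to a person.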
The Market Behind the Concern
Emotionally interactive AI has rapidly expanded in China.
Industry reports estimate that the AI companion space is worth tens of billions of dollars, driven by apps where users spend extended time forming conversational or emotional connections with AI characters and digital personalities.
Some platforms report users spending hours a day with AI companions—often seeking comfort, reassurance, or a judgement-free presence.
This is no longer casual interaction.
For many users, this is becoming companionship.
And that is exactly what has made policymakers pay attention.
How the Draft Protects Children
The regulation places particular emphasis on children and teenagers.
Minors are expected to face stricter access conditions.
Systems must support age filtering, stronger content controls, and clearer reminders that AI conversations are not real friendships or human emotional relationships.
Instead of banning AI companionship entirely, the draft takes a balanced “protect and guide” approach.
It sets limits, introduces oversight, and places greater responsibility on companies developing emotionally immersive systems.
Parents are expected to gain more visibility into how their children interact with AI, while companies will be required to verify age and ensure age-appropriate safeguards.
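As a rough illustration of the kind of gating this implies, here is a hypothetical sketch of minor-mode safeguards. The age threshold, session limit, and logging behaviour are assumptions for the sake of the example, not requirements quoted from the draft.

```python
# Hypothetical sketch of minor-mode safeguards. The thresholds, setting
# names, and logging behaviour are illustrative assumptions, not text
# from the draft regulation.

from datetime import datetime

ADULT_AGE = 18               # assumed threshold for full access
MINOR_SESSION_MINUTES = 30   # assumed cap before a break reminder


def apply_minor_safeguards(verified_age: int, session_minutes: int,
                           parental_log: list[str]) -> dict:
    """Return interaction settings for this user, tightening controls
    and logging activity when the user is a minor."""
    settings = {
        "content_filter": "standard",
        "show_ai_disclosure": True,   # always remind users it is an AI
        "suggest_break": False,
    }
    if verified_age < ADULT_AGE:
        settings["content_filter"] = "strict"
        settings["suggest_break"] = session_minutes >= MINOR_SESSION_MINUTES
        # give parents visibility into usage, as the draft anticipates
        parental_log.append(
            f"{datetime.now().isoformat()}: minor session, "
            f"{session_minutes} min"
        )
    return settings
```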
Global Ripples
Although the regulation is Chinese, its impact may reach far beyond China.
If your AI product is popular in China, these rules may apply to you. And even if they don’t yet, they could in the future—because other governments are watching very closely.
The EU’s AI Act already restricts manipulative systems and some uses of emotion recognition, but it does not treat emotional companionship or dependency as a regulatory category of its own. In the United States, discussions continue, though few frameworks explicitly govern emotional AI to this degree.
China’s move may push emotional safety higher onto the global AI policy agenda.
What Might Change for You
For everyday users, these rules won’t mean the end of chatbots, AI tutors, or virtual companions. But they may change how those tools behave.
You may begin to see:
- more reminders about long engagement sessions
- clearer identity disclosures that AI is not human
- emotionally sensitive conversations redirected to real human support
- stronger parental control measures becoming standard
Over time, this may reshape how people view emotionally responsive AI—not as a replacement for human connection, but as an assistant with limits.
What’s Next?
The draft is still under review, and revisions are expected before anything becomes law. But the direction is unmistakable:
Emotional AI is now officially on the regulatory radar.
If implemented, China would become one of the first countries to formally treat emotional simulation and emotional dependency as areas of AI risk—setting a precedent for how AI companionship may be governed globally.
This represents a meaningful shift not only for developers but also for anyone who chats with, relies on, or feels something toward the AI on their screen.