China Unveils Draft Regulations to Oversee AI That Mimics Human Personalities and Emotional Interaction

  • Dec 28, 2025


In a significant move that could redefine the role of artificial intelligence in society, China has released a comprehensive set of draft rules aimed at regulating AI systems that simulate human behavior and interact with users on an emotional level. The policy shift reflects Beijing’s deepening effort to balance technological innovation with societal stability and ethical safeguards. The draft, unveiled by the Cyberspace Administration of China for public comment, targets consumer-facing AI products and services that exhibit human-like traits such as personality, reasoning and communication styles in text, images, audio and video. It lays out an ambitious regulatory framework intended to govern everything from how these systems interact with individuals to how companies must assess and mitigate the risks associated with emotional engagement.


At the heart of the proposed rules lies a recognition that AI technologies capable of forging emotional connections with users, whether through chatbots, virtual companions or other immersive interfaces, present new challenges that go beyond the traditional concerns of data privacy and cybersecurity. Chinese regulators are particularly focused on addressing psychological risks, proposing measures that would require AI providers to monitor user behavior and emotional states and to intervene if signs of addiction, excessive reliance or extreme emotions are detected. This degree of oversight underscores Beijing’s cautious stance toward technology that can influence human behavior and suggests a desire to prevent outcomes that could undermine individual well-being or social harmony.


Under the draft regulations, service providers would be required to take responsibility for safety throughout the entire lifecycle of an AI product, from design and development to deployment and ongoing operation. Companies would need to implement robust systems for algorithm review, data security and the protection of personal information, reflecting broader concerns about the ethical and legal implications of increasingly sophisticated AI. With the public comment period open through late January 2026, stakeholders including industry participants, experts and ordinary citizens have the opportunity to weigh in on the details before the rules are finalized.


One of the most striking aspects of the draft is its emphasis on real-time intervention. If an AI system detects that a user is exhibiting signs of emotional dependence or distress, providers would be required to take corrective actions, ranging from issuing warnings about excessive use to actively stepping in to reduce harm. This reflects an unusual level of regulatory concern with psychological outcomes, a step that goes beyond most existing frameworks for AI governance in other parts of the world, where ethical guidelines tend to focus more narrowly on transparency, bias mitigation and data protection. China’s draft underscores the perception that AI with human-like interaction holds the capacity to shape users’ feelings and decisions in ways that warrant proactive oversight.


The draft also sets clear boundaries around content and conduct. AI services would be prohibited from generating material that threatens national security, spreads misinformation or promotes violence, obscenity or harmful behavior. By establishing these red lines, Chinese regulators are seeking to ensure that human-like AI systems not only avoid direct harms but also uphold broader social interests and values. The inclusion of national security concerns signals that AI policy in China is not just a matter of consumer protection but also intricately tied to state priorities and social governance.


Experts see the draft regulations as part of China’s larger strategy to steer the rapid rollout of sophisticated AI technologies while minimizing potential downsides. Because these human-interactive systems are increasingly prevalent in everyday life, from customer service bots to virtual companions, the need to establish clear rules for how they operate and engage users is becoming more urgent. The draft’s call for lifecycle safety oversight, algorithm scrutiny and emotional risk assessment reflects a holistic view of AI governance that spans technical, psychological and ethical dimensions.


Industry observers note that China’s proposed rules are part of a broader global trend toward governing advanced AI systems in ways that prioritize user safety and ethical standards. In other regions, regulators have grappled with similar challenges, exploring how to balance innovation with accountability and how to ensure that AI does not inadvertently harm individuals or societal cohesion. China’s draft stands out for its explicit focus on emotional interaction and its requirement that AI systems not only identify but respond to signs of user distress or overreliance.


The draft also reflects the pace at which AI technologies are evolving. Systems that can convincingly mimic human personalities, carry on extended conversations and elicit emotional responses have moved from research labs into mainstream applications, blurring the line between tools and digital companions. This shift raises complex questions about autonomy, mental health and the nature of human-machine relationships, questions that regulators are now attempting to address through frameworks like China’s draft guidelines.


As the public consultation period unfolds, the input received will likely shape the final form of the regulations and provide insights into how different sectors perceive the benefits and risks of emotionally interactive AI. Supporters of tighter rules argue that they are necessary to prevent addiction, protect younger users and avoid manipulative applications, while critics may caution that overly prescriptive guidelines could stifle innovation or impose burdensome compliance costs on developers. The balance between fostering technological advancement and safeguarding public interests remains a central tension in the emerging field of AI governance.


China’s draft regulations on human-like AI interaction represent a bold and comprehensive approach to a pressing contemporary issue. By addressing emotional engagement, psychological risk and content safety, the draft not only highlights the unique challenges posed by next-generation AI but also indicates Beijing’s willingness to assert regulatory control over technologies that touch everyday life. As these rules progress from draft to potential policy, they may influence not only how AI is developed and deployed in China but also how other countries think about governing the next frontier of artificial intelligence.
