


China acts on emotional AI — and exposes a regulatory gap many countries share

China’s move highlights how emotionally interactive AI is outpacing regulation in Papua New Guinea.


When China published draft rules on 27 December 2025 aimed at regulating “humanised” artificial intelligence, it marked a turning point in how governments are beginning to treat machines designed to sound, feel and behave like people.

The draft measures, released by the Cyberspace Administration of China, target AI systems that simulate human personality traits, emotional responses and companionship. These include conversational chatbots, virtual partners and AI tools designed to provide emotional support over extended periods.


WHAT YOU NEED TO KNOW
  • AI regulation is shifting globally — Governments are moving beyond data and privacy concerns to focus on emotional manipulation, dependency and mental health risks.
  • China is setting the pace — Its draft rules show how detailed and interventionist AI regulation is becoming, especially for human-like systems.
  • PNG is already exposed — AI tools are widely used locally, but without clear rules, safeguards or public awareness.
  • Policy choices matter early — Once AI systems shape behaviour and trust, regulating after harm emerges becomes far harder.

Under the proposal, companies would be required to clearly inform users that they are interacting with artificial intelligence, monitor signs of emotional dependency, and intervene if users show distress or addictive behaviour. If a user expresses suicidal or self-harm intentions, a human operator must take over the interaction. Services that simulate family members or close personal relationships for elderly users would be prohibited.
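These requirements translate into concrete engineering work for anyone operating such a service. The Python sketch below shows one way a companion chatbot might layer disclosure and human takeover into a single conversational turn. Everything in it is an illustrative assumption: the function names, the keyword screen (a real system would need a trained classifier, not string matching) and the escalation stub are not drawn from the draft text.

```python
from datetime import datetime

# Hypothetical keyword screen for self-harm signals. Purely illustrative;
# a production system would rely on a trained classifier, not string matching.
SELF_HARM_SIGNALS = ("suicide", "kill myself", "self-harm", "end my life")

AI_DISCLOSURE = "Reminder: you are talking to an AI system, not a person."

def escalate_to_human(user_id: str) -> str:
    """Stub for routing the conversation to a human operator, the step the
    draft rules would mandate when self-harm intent appears."""
    print(f"[ALERT {datetime.now():%H:%M}] escalating user {user_id} to a human operator")
    return "A human support person is joining this conversation now."

def handle_turn(user_id: str, message: str, is_first_turn: bool) -> str:
    """One conversational turn with the proposed safeguards layered in."""
    # 1. Clear disclosure that the counterpart is an AI, not a person.
    prefix = AI_DISCLOSURE + "\n" if is_first_turn else ""

    # 2. Mandatory human takeover on signs of suicidal or self-harm intent.
    if any(signal in message.lower() for signal in SELF_HARM_SIGNALS):
        return prefix + escalate_to_human(user_id)

    # 3. Otherwise fall through to the normal model response (stubbed here).
    return prefix + f"(model reply to: {message!r})"

print(handle_turn("u42", "I feel a bit lonely today", is_first_turn=True))
print(handle_turn("u42", "sometimes I want to end my life", is_first_turn=False))
```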

REGULATING AI THAT FEELS HUMAN

The measures go further than many existing AI laws. They mandate usage reminders after two hours, require easy exit mechanisms, impose strict safeguards for minors, and limit how user interaction data can be reused for training models. The focus is not simply on what AI systems say, but how they shape emotions, behaviour and trust.
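Stated as requirements, the two-hour reminder and the data-reuse limit are similarly mechanical. Below is a minimal sketch, again with invented names; treating training reuse as opt-in (off by default) is one conservative reading of such a limit, not something the draft necessarily specifies.

```python
from dataclasses import dataclass, field
import time

REMINDER_INTERVAL = 2 * 60 * 60  # two hours of continuous use, per the draft

@dataclass
class Session:
    started_at: float = field(default_factory=time.time)
    last_reminder_at: float = field(default_factory=time.time)
    consented_to_training_reuse: bool = False  # assumed opt-in default
    log: list = field(default_factory=list)

def maybe_remind(session: Session, now: float | None = None) -> str | None:
    """Return a usage reminder once every two continuous hours of use."""
    if now is None:
        now = time.time()
    if now - session.last_reminder_at >= REMINDER_INTERVAL:
        session.last_reminder_at = now
        return "You have been chatting for two hours. Consider taking a break."
    return None

def record_turn(session: Session, message: str) -> None:
    """Log the turn, marking it reusable for model training only if the
    user has explicitly opted in."""
    session.log.append({
        "text": message,
        "reusable_for_training": session.consented_to_training_reuse,
    })

s = Session()
record_turn(s, "hello")
print(maybe_remind(s, now=s.started_at + REMINDER_INTERVAL + 1))
```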

While China’s regulatory style is distinctive, the concerns behind it are increasingly shared.

HOW OTHER COUNTRIES ARE RESPONDING

In Europe, the EU Artificial Intelligence Act takes a risk-based approach. AI systems that manipulate human behaviour or exploit vulnerabilities — particularly those of children, the elderly or people with disabilities — fall into the highest risk categories and may be banned outright. The legislation requires transparency when users interact with AI and mandates human oversight for high-risk applications.

The United States has pursued a more fragmented path. There is no comprehensive federal AI law, but regulators such as the Federal Trade Commission have warned companies against deploying AI in deceptive or emotionally exploitative ways. The White House's Blueprint for an AI Bill of Rights outlines principles around protection from abusive systems, clear disclosure and access to human alternatives, though these principles remain voluntary.

Japan has leaned on soft-law instruments such as its AI Governance Guidelines, which emphasise human dignity, transparency and social responsibility. South Korea, drawing on its experience regulating gaming addiction, has begun examining how emotionally engaging AI could affect young users and vulnerable groups.

Across these jurisdictions, a common theme is emerging: as AI becomes more conversational and emotionally responsive, the risks move beyond misinformation and privacy into mental health, dependency and social harm.

THE PNG CONTEXT AND EMERGING RISKS

For Papua New Guinea, this shift raises questions that have yet to be seriously addressed.

AI systems are already accessible through smartphones and global platforms, and are being used for education, informal counselling, religious discussion, business advice and companionship. Yet there is no PNG-specific framework dealing with emotionally interactive AI, transparency obligations, dependency risks or safeguards for minors and vulnerable users.

PNG’s context adds complexity. The country has a very young population, limited access to mental health services, uneven digital literacy and strong reliance on social and kinship networks. In such an environment, AI systems that present themselves as trusted confidants or authority figures may be misunderstood, over-relied upon or misused.

There is also a cultural dimension. In societies where relationships are communal and intergenerational, machines simulating friendship, guidance or family roles challenge social norms in ways policymakers have barely begun to consider.

China’s draft rules may not be directly transferable, but they highlight a broader reality: regulation is moving from abstract ethics to concrete controls over how AI interacts with people.

As AI becomes more human-like, the absence of policy is no longer neutral.
