TL;DR

China’s Cyberspace Administration published draft rules for ‘humanized’ AI companions that would bar systems from simulating specific relatives for elderly users and impose safety, data-protection and mental-health safeguards. The draft also requires periodic reminders that the companion is not human, parental controls, and a ban on using interaction data to train models; public feedback is open until January 25.

What happened

China’s Cyberspace Administration released a draft titled “Interim Measures for the Administration of Humanized Interactive Services Based on Artificial Intelligence” that lays out limits and obligations for AI companion services. The draft calls on providers to secure their systems, prevent fraud, encrypt data, reflect core socialist values, and implement parental controls and protections for minors. For elderly users, the rules would require providers to help set up emergency contacts, notify those contacts if a user faces threats to life, health or property, and offer social or psychological assistance or emergency-relief channels. The proposal would also forbid services that simulate a specific relative or relationship for elderly users. Other requirements include reminding users at least every two hours that they are interacting with an AI, giving advance notice of outages, avoiding designs that intentionally replace social interaction or induce addiction, and prohibiting the use of interaction data to train models. The Cyberspace Administration is seeking comments through January 25.

Why it matters

  • Designers of companion AI could be forced to remove features that impersonate family members, reshaping eldercare applications.
  • A ban on using interaction data for model training would affect how developers collect and improve conversational AI.
  • Mandates like periodic reminders, parental controls and emergency-contact workflows raise technical and operational compliance requirements for providers (see the sketch after this list).
  • The rules link AI behaviour to ideological and public-safety goals, signaling regulatory scrutiny beyond purely technical issues.
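
As a concrete illustration of the two-hour disclosure mandate, here is a minimal sketch of how a provider might implement the reminder check. The draft does not prescribe any mechanism; the names below (Session, REMINDER_INTERVAL, maybe_remind) are hypothetical, chosen only for this example.

    from datetime import datetime, timedelta

    # Hypothetical sketch: the draft requires reminding users at least
    # every two hours that they are talking to an AI; it does not
    # prescribe a mechanism, so everything here is illustrative only.
    REMINDER_INTERVAL = timedelta(hours=2)

    class Session:
        def __init__(self) -> None:
            # Treat the start of the session as the first disclosure.
            self.last_reminder = datetime.now()

        def maybe_remind(self) -> str | None:
            """Return a disclosure message if two hours have elapsed
            since the last one, otherwise None."""
            now = datetime.now()
            if now - self.last_reminder >= REMINDER_INTERVAL:
                self.last_reminder = now
                return "Reminder: you are chatting with an AI, not a human."
            return None

Running this check on every conversational turn, rather than on a background timer, is one simple way to guarantee the disclosure appears while the user is actually engaged.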

Key facts

  • The draft is titled “Interim Measures for the Administration of Humanized Interactive Services Based on Artificial Intelligence.”
  • It was posted by China’s Cyberspace Administration and opened for public comment.
  • Providers must guide elderly users in setting emergency contacts and notify those contacts if users face risks to life, health or property.
  • The draft prohibits AI services that simulate a specific relative or relationship of an elderly user.
  • Systems must remind users they are not interacting with a human at least every two hours.
  • Providers are required to implement parental controls and protect data related to minors.
  • The draft calls for encryption and fraud prevention, and requires that services reflect core socialist values.
  • It includes requirements for mental-health protections, emotional boundary guidance and dependency-risk warnings.
  • Using data gathered during interactions to train AI models is disallowed under the draft (see the sketch after this list).
  • The Cyberspace Administration set a feedback deadline of January 25.
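
To make the training-data restriction concrete, here is a minimal sketch of how a provider might tag and filter interaction-sourced records out of a training corpus. The record schema and the "source" field are assumptions made for illustration, not anything the draft specifies.

    # Hypothetical sketch: the draft bans training on in-session
    # interaction data; the "source" tag is an assumed provenance
    # label attached to each record at collection time.
    records = [
        {"text": "licensed dialogue sample", "source": "licensed_corpus"},
        {"text": "message typed by a user in-session", "source": "user_interaction"},
    ]

    # Drop interaction-sourced records before assembling the training set.
    training_set = [r for r in records if r["source"] != "user_interaction"]

    assert all(r["source"] != "user_interaction" for r in training_set)

Tagging provenance at collection time, rather than trying to classify records after the fact, keeps an exclusion like this simple to enforce and to audit.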

What to watch next

  • The public comment period and any revisions to the draft ahead of finalisation (feedback deadline: January 25)
  • How companies will technically prevent companions from simulating specific relatives, and how enforcement will be carried out
  • Whether the ban on using interaction data for training will be modified, clarified or extended to other AI use cases

Quick glossary

  • AI companion: A conversational or interactive artificial intelligence system designed to engage users in social or emotional interactions.
  • Cyberspace Administration of China (CAC): China’s government agency responsible for internet policy, regulation and oversight.
  • Parental controls: Features that restrict or monitor access to content or functions to protect minors and enforce safety rules.
  • Model training data: Data used to train machine-learning systems so they learn patterns and generate outputs; some rules restrict what data can be used.

Reader FAQ

Does the draft ban AI from simulating relatives for elderly users?
Yes. The draft explicitly prohibits providers from offering services that simulate a specific relative or relationship for elderly users.

Are providers allowed to use interaction data to train their AI models?
No. The draft bars providers from using data gathered during interactions to train models.

Will users be told they are talking to an AI?
The proposal requires AI companions to remind users they are not interacting with a human at least every two hours.

When does the comment period close?
The Cyberspace Administration is seeking feedback through January 25.
