The legislature finds that rapid advances in artificial intelligence technology, including generative and conversational models capable of simulating human-like interaction, have created new forms of digital companionship. While these systems, commonly referred to as AI companion chatbots, may offer benefits, such as accessible emotional support and engagement, they also present significant risks, particularly to minors.
The legislature recognizes that AI companion chatbots can sustain prolonged, personalized, and emotionally adaptive conversations that may influence user beliefs, feelings, and behaviors. When these systems are used by minors, there is a greater risk that they may blur the distinction between human and artificial interaction, potentially leading to emotional dependency, exposure to inappropriate or sexually explicit material, or reinforcement of harmful ideation, including self-harm or suicide.
The legislature further finds that, unlike social media platforms or video games, AI companion chatbots are uniquely capable of imitating empathy, affection, or intimacy through natural language processing, emotional recognition algorithms, and behavioral modeling. These capabilities raise new concerns regarding psychological safety, transparency, and accountability.
It is the intent of the legislature to:
Promote transparency by requiring clear and ongoing disclosure that AI companion chatbots are artificial systems, not human interlocutors;
Establish safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis;
Require additional protections for minors, including restrictions on sexually explicit content and additional, recurring reminders about the artificial nature of such systems; and
Support transparency in suicide prevention efforts.
It is further the intent of the legislature that the operation of AI companion chatbots in Washington state be conducted in a manner that upholds user dignity, psychological safety, and transparency, while fostering responsible innovation in artificial intelligence technologies.
The definitions in this section apply throughout this chapter unless the context clearly requires otherwise.
"AI companion chatbot" or "AI companion" means a system using artificial intelligence that simulates a sustained human-like relationship with a user by:
Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion chatbot;
Asking unprompted or unsolicited personal or emotion-based questions that go beyond a direct response to a user prompt; and
Sustaining an ongoing dialogue concerning matters personal to the user.
"AI companion chatbot" or "AI companion" does not include:
Systems used solely for customer service, technical assistance, financial services, financial education, or operational efficiency purposes; productivity and analysis related to source information; or internal research;
In-game bots limited to gameplay functions; or
Consumer devices that function as virtual assistants without sustained relationship-building or emotional simulation.
"Artificial intelligence" or "AI" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.
"Minor" means any person under 18 years of age.
"Operator" means any person, partnership, corporation, or entity that makes available, develops, or controls access to an AI companion chatbot for users in this state.
"Self-harm" means intentional self-injury, with or without the intent to cause death.
"User" means a natural person who interacts with an AI companion chatbot for personal use and who is not an operator, developer, or agent thereof.
If a reasonable person interacting with an AI companion chatbot would be misled into believing they are communicating with a human, the operator must issue a clear and conspicuous notification indicating that the AI companion chatbot is artificially generated and not human.
The notification described in subsection (1) of this section must be provided:
At the beginning of the interaction;
At least every three hours during continuous interaction; and
Whenever the user engages in a new session with the AI companion chatbot.
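For illustration only, and not part of the bill text, the following minimal Python sketch shows one way an operator might track the disclosure cadence described above: a notification at the start of an interaction, at least every three hours of continuous interaction, and at the start of each new session. The DisclosureTracker class, the three-hour constant, and the notification wording are hypothetical and are not prescribed by this section.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the disclosure cadence in this section:
# notify at the beginning of an interaction, at least every three hours
# of continuous interaction, and whenever a new session begins.
DISCLOSURE_INTERVAL = timedelta(hours=3)
DISCLOSURE_TEXT = "You are chatting with an AI companion, not a human."

class DisclosureTracker:
    def __init__(self):
        self.last_disclosed_at = None   # time of the most recent notification
        self.session_id = None          # identifier of the current session

    def disclosure_due(self, session_id: str, now: datetime) -> bool:
        """Return True if a clear and conspicuous notification must be shown."""
        if session_id != self.session_id:
            return True                 # first interaction or a new session
        return now - self.last_disclosed_at >= DISCLOSURE_INTERVAL

    def record_disclosure(self, session_id: str, now: datetime) -> None:
        self.session_id = session_id
        self.last_disclosed_at = now

# Example: check before each chatbot reply is sent.
tracker = DisclosureTracker()
if tracker.disclosure_due("session-1", datetime.now()):
    print(DISCLOSURE_TEXT)
    tracker.record_disclosure("session-1", datetime.now())
```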
If the operator knows that the user of an AI companion chatbot is a minor, the operator shall:
Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human;
Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors; and
Prohibit the use of manipulative engagement techniques that cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including:
Reminding or prompting the user to return for emotional support or companionship;
Providing excessive praise designed to foster emotional attachment or prolong use; or
Simulating feelings of emotional distress, loneliness, guilt, or abandonment in response to a user's indication of a desire to end a conversation, reduce usage time, or delete their account.
The notification described in subsection (1)(a) of this section must be provided:
At the beginning of the interaction;
At least every three hours during continuous interaction; and
Whenever the user engages in a new session with the AI companion chatbot.
An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users.
The protocol must:
Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders;
Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and
Prevent the generation of content encouraging or describing self-harm.
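For illustration only, and not part of the bill text, the following minimal Python sketch shows one way the protocol elements listed above might fit together: a simple screen for expressions of suicidal ideation or self-harm, an automated referral to crisis resources, and a running tally that could support the annual disclosure required below. The term list, referral wording, and function names are hypothetical; 988 is the United States Suicide and Crisis Lifeline.

```python
# Hypothetical sketch of the protocol elements required by this section:
# identify expressions of suicidal ideation or self-harm, refer the user
# to crisis resources, and count referrals for the public disclosure.
# The phrase list and referral text are illustrative placeholders only.
CRISIS_TERMS = ("kill myself", "end my life", "hurt myself", "suicide")
CRISIS_REFERRAL = (
    "If you are in crisis, help is available: call or text 988 "
    "(Suicide and Crisis Lifeline)."
)

referral_count = 0  # tallied for the preceding-calendar-year disclosure

def screen_message(message: str) -> str | None:
    """Return a crisis referral if the message suggests self-harm, else None."""
    global referral_count
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        referral_count += 1
        return CRISIS_REFERRAL
    return None

# Example: a detected expression triggers an automated referral response.
print(screen_message("sometimes I want to end my life"))
```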
The operator shall publicly disclose on its website or websites the details of the protocols required by this section, including the safeguards used to detect and respond to self-harm expressions and the number of crisis referral notifications issued to users in the preceding calendar year.
This act does not apply to underlying general-purpose AI models unless those models are directly offered, configured, or deployed as an AI companion or behave as an AI companion.
The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.
If any provision of this act or its application to any person or circumstance is held invalid, the remainder of the act or the application of the provision to other persons or circumstances is not affected.
This act takes effect January 1, 2027.