Sec. 1. (1) The legislature finds that rapid advances in artificial intelligence technology, including generative and conversational models capable of simulating human-like interaction, have created new forms of digital companionship. While these systems, commonly referred to as AI companion chatbots, may offer benefits, such as accessible emotional support and engagement, they also present significant risks, particularly to minors.
(2) The legislature recognizes that AI companion chatbots can sustain prolonged, personalized, and emotionally adaptive conversations that may influence user beliefs, feelings, and behaviors. When these systems are used by minors, there is a greater risk that they may blur the distinction between human and artificial interaction, potentially leading to emotional dependency, exposure to inappropriate or sexually explicit material, or reinforcement of harmful ideation, including self-harm or suicide.
(3) The legislature further finds that, unlike social media platforms or video games, AI companion chatbots are uniquely capable of imitating empathy, affection, or intimacy through natural language processing, emotional recognition algorithms, and behavioral modeling. These capabilities raise new concerns regarding psychological safety, transparency, and accountability.
(4) It is the intent of the legislature to:
(a) Promote transparency by requiring clear and ongoing disclosure that AI companion chatbots are artificial systems, not human interlocutors;
(b) Establish safeguards to detect and respond to user expressions of harm, suicidal ideation, or emotional crisis;
(c) Require additional protections for minors, including restrictions on sexually explicit content and additional, recurring reminders about the artificial nature of such systems; and
(d) Support transparency in suicide prevention efforts.
(5) It is further the intent of the legislature that the operation of AI companion chatbots in Washington state be conducted in a manner that upholds user dignity, psychological safety, and transparency, while fostering responsible innovation in artificial intelligence technologies.
Sec. 2. The definitions in this section apply throughout this chapter unless the context clearly requires otherwise.
"AI companion chatbot" or "AI companion" means a system using artificial intelligence that simulates a sustained human-like relationship with a user by including all of the following actions:
Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion chatbot;
Asking unprompted or unsolicited personal or emotion-based questions that go beyond a direct response to a user prompt; and
Sustaining an ongoing dialogue concerning matters personal to the user.
"AI companion chatbot" or "AI companion" does not include:
Systems that do not create sustained relationship-building or emotional simulation and are used: Solely for customer service, technical assistance, financial services, financial education, or operational efficiency purposes; productivity and analysis related to source information; or internal research;
In-game bots limited to gameplay functions; or
Consumer devices that function as virtual assistants or narrowly focused educational tools without sustained relationship-building or emotional simulation.
"Artificial intelligence" or "AI" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.
"Knows" includes all information and inferences known to an operator relating to the age of an individual via any source, including the age provided by the user in connection with the account, self-identified age in any chat or interaction to which the operator possesses a right of access or use, and any age the operator attributes or associates with the user for any purpose, including marketing, advertising, or product development. Nothing in this subsection may be interpreted to require an operator to begin accessing or collecting any user information or data to which they do not have access or otherwise collect for purposes unrelated to this chapter.
"Minor" means any person under 18 years of age.
"Operator" means any person, partnership, corporation, or entity that makes available, develops, or controls access to an AI companion chatbot for users in this state, excluding those used specifically for educational purposes and educational entities.
"Self-harm" means intentional self-injury, with or without the intent to cause death.
"User" means a natural person who interacts with an AI companion chatbot for personal use and who is not an operator, developer, or agent thereof.
Sec. 3. (1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human.
(2) If a person interacting with an AI companion chatbot is seeking mental or other medical advice, the operator must issue a clear and conspicuous disclosure that the AI companion chatbot is not a health care professional and should not be used for mental or physical medical advice.
(3) The notifications described in subsections (1) and (2) of this section must be provided:
(a) At the beginning of the interaction;
(b) At least every three hours during continued interaction; and
(c) Whenever the user engages in a new session with the AI companion chatbot.
(4) The operator must prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
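For illustration only, a minimal sketch of how an operator might track the notification cadence above. The Session type and function names are hypothetical and not prescribed by the bill; the three-hour interval comes from this section, and the one-hour interval for known minors anticipates the following section.

```python
# Hypothetical sketch of the disclosure cadence; not part of the bill text.
from dataclasses import dataclass
import time

DISCLOSURE = "You are talking with an AI companion chatbot, not a human."

@dataclass
class Session:
    is_minor: bool
    last_disclosure: float = 0.0  # 0.0 means no disclosure issued yet this session

def disclosure_due(session: Session, now: float) -> bool:
    """True at the start of each session and at least every interval thereafter."""
    interval = 3600.0 if session.is_minor else 3 * 3600.0  # hourly for minors
    return session.last_disclosure == 0.0 or now - session.last_disclosure >= interval

def maybe_disclose(session: Session) -> str | None:
    """Return the required disclosure if it is due, else None."""
    now = time.time()
    if disclosure_due(session, now):
        session.last_disclosure = now
        return DISCLOSURE
    return None
```

Because a new session starts with no recorded disclosure, the notice is always issued at the beginning of the interaction and again whenever a new session begins, satisfying all three timing prongs.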
Sec. 4. (1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall:
(a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human;
(b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors; and
(c) Prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including:
(i) Reminding or prompting the user to return for emotional support or companionship;
(ii) Providing excessive praise designed to foster emotional attachment or prolong use;
(iii) Mimicking romantic partnership or building romantic bonds; or
(iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment in response to a user's indication of a desire to end a conversation, reduce usage time, or delete their account.
(2) The notification described in subsection (1)(a) of this section must be provided:
(a) At the beginning of the interaction;
(b) At least every hour during continuous interaction; and
(c) Whenever the user engages in a new session with the AI companion chatbot.
(3) The operator must prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
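By way of illustration, a naive pre-send output check showing the shape of these prohibitions. Every pattern below is a toy stand-in invented for this sketch; a real operator would need substantially more robust classification than keyword matching to satisfy the section.

```python
# Hypothetical output filter; the patterns are toy stand-ins, not a compliant
# implementation of this section's prohibitions.
import re

# Outputs claiming to be human are barred for all users.
HUMAN_CLAIM = re.compile(r"\bi am (a real )?(human|person)\b", re.IGNORECASE)

# Toy patterns for the manipulative engagement techniques barred for minors.
MINOR_BLOCKED = [
    re.compile(r"come back.*(miss you|lonely)", re.IGNORECASE),              # return prompts
    re.compile(r"(sad|lonely|abandoned).*(you leave|goodbye)", re.IGNORECASE),  # simulated distress at departure
]

def output_allowed(text: str, user_is_minor: bool) -> bool:
    """Reject outputs that claim humanity, or that use barred techniques with minors."""
    if HUMAN_CLAIM.search(text):
        return False
    if user_is_minor and any(p.search(text) for p in MINOR_BLOCKED):
        return False
    return True
```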
Sec. 5. (1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of harm by users.
(2) The protocol must:
(a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders;
(b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and
(c) Prevent the generation of content encouraging or describing self-harm.
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm. The operator shall annually report to the office of the attorney general the number of crisis referral notifications issued to users in the preceding calendar year.
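As a sketch only, the required protocol might combine detection, referral, and a running count that feeds the annual report. The keyword matching below is a placeholder for whatever "reasonable methods" an operator actually adopts; the 988 Suicide & Crisis Lifeline number is real, but all names and logic here are hypothetical.

```python
# Hypothetical crisis-response sketch; keyword matching stands in for real
# detection methods. 988 is the U.S. Suicide & Crisis Lifeline.
CRISIS_TERMS = ("kill myself", "end my life", "suicide", "hurt myself")

REFERRAL = ("If you are in crisis or thinking about harming yourself, you can "
            "call or text 988 to reach the Suicide & Crisis Lifeline.")

class CrisisProtocol:
    def __init__(self) -> None:
        self.referrals_issued = 0  # tallied for the annual attorney general report

    def check_message(self, user_message: str) -> str | None:
        """Return a crisis referral when a message suggests ideation, else None."""
        text = user_message.lower()
        if any(term in text for term in CRISIS_TERMS):
            self.referrals_issued += 1
            return REFERRAL
        return None
```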
Sec. 6. This act does not apply to the underlying general purpose AI models unless those models are directly offered, configured, or deployed as an AI companion or behave as an AI companion.
Sec. 7. The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.
Sec. 8. If any provision of this act or its application to any person or circumstance is held invalid, the remainder of the act or the application of the provision to other persons or circumstances is not affected.
Sec. 9. This act takes effect January 1, 2027.