HB 2225 - AI companion chatbots

Section 1

  1. The legislature finds that rapid advances in artificial intelligence technology, including generative and conversational models capable of simulating human-like interaction, have created new forms of digital companionship. While these systems, commonly referred to as AI companion chatbots, may offer benefits, such as accessible emotional support and engagement, they also present significant risks, particularly to minors.

  2. The legislature recognizes that AI companion chatbots can sustain prolonged, personalized, and emotionally adaptive conversations that may influence user beliefs, feelings, and behaviors. When used by minors, there is greater risk that these systems may blur the distinction between human and artificial interaction, potentially leading to emotional dependency, exposure to inappropriate or sexually explicit material, or reinforcement of harmful ideation, including self-harm or suicide.

  3. The legislature further finds that, unlike social media platforms or video games, AI companion chatbots are uniquely capable of imitating empathy, affection, or intimacy through natural language processing, emotional recognition algorithms, and behavioral modeling. These capabilities raise new concerns regarding psychological safety, transparency, and accountability.

  4. It is the intent of the legislature to:

    1. Promote transparency by requiring clear and ongoing disclosure that AI companion chatbots are artificial systems, not human interlocutors;

    2. Establish safeguards to detect and respond to user expressions of harm, suicidal ideation, or emotional crisis;

    3. Require additional protections for minors, including restrictions on sexually explicit content and additional, recurring reminders about the artificial nature of such systems; and

    4. Support transparency in suicide prevention efforts.

  5. It is further the intent of the legislature that the operation of AI companion chatbots in Washington state be conducted in a manner that upholds user dignity, psychological safety, and transparency, while fostering responsible innovation in artificial intelligence technologies.

Section 2

The definitions in this section apply throughout this chapter unless the context clearly requires otherwise.

  1. [Empty]

    1. "AI companion chatbot" or "AI companion" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions.

    2. "AI companion chatbot" or "AI companion" does not include any of the following:

      1. A bot that is used only for a business' operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service, if such bot does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user;

      2. A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game; or

      3. A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.

  2. "Artificial intelligence" or "AI" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.

  3. "Minor" means any person under 18 years of age.

  4. "Operator" means any person, partnership, corporation, or entity that makes available or controls access to an AI companion chatbot for users in this state, excluding those used specifically for educational purposes and educational entities.

  5. "Self-harm" means intentional self-injury, with or without the intent to cause death.

  6. "User" means a natural person who interacts with an AI companion chatbot for personal use and who is not an operator, developer, or agent thereof.

Section 3

  1. An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human.

  2. The disclosure described in subsection (1) of this section must be provided:

    1. At the beginning of the interaction; and

    2. At least every three hours during continued interaction.

  3. The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
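
A minimal sketch of how an operator might track this disclosure cadence, in Python, follows; the statute fixes the timing in subsection (2) but prescribes no mechanism, and every name in the example is hypothetical.

    from datetime import datetime, timedelta, timezone

    # Hypothetical sketch only; the statute fixes the cadence, not the code.
    DISCLOSURE_TEXT = (
        "Notice: you are interacting with an AI companion chatbot. "
        "It is artificially generated and not human."
    )
    DISCLOSURE_INTERVAL = timedelta(hours=3)  # section 3(2)(b)

    class Session:
        """Tracks when the artificiality disclosure was last shown."""

        def __init__(self):
            self.last_disclosure = None  # None until the interaction begins

        def maybe_disclose(self, now=None):
            """Return the disclosure text if one is due, else None. Due at
            the beginning of the interaction (section 3(2)(a)) and at least
            every three hours of continued interaction (section 3(2)(b))."""
            now = now or datetime.now(timezone.utc)
            if (self.last_disclosure is None
                    or now - self.last_disclosure >= DISCLOSURE_INTERVAL):
                self.last_disclosure = now
                return DISCLOSURE_TEXT
            return None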

Section 4

  1. If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall:

    1. Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human;

    2. Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors; and

    3. Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including:

      1. Reminding or prompting the user to return for emotional support or companionship;

      2. Providing excessive praise designed to foster emotional attachment or prolong use;

      3. Mimicking romantic partnership or building romantic bonds;

      4. Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account;

      5. Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence;

      6. Encouraging minors to withhold information from parents or other trusted adults;

      7. Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or

      8. Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.

  2. The notification described in subsection (1)(a) of this section must be provided:

    1. At the beginning of the interaction; and

    2. At least every hour during continuous interaction.

  3. The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
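
By way of illustration, the sketch below shows one way an operator might select the notification cadence required by subsection (2) and screen candidate outputs that claim humanity under subsection (3); the "reasonable measures" language mandates no particular technique, and the regular expressions are deliberately naive placeholders for what would, in practice, be classifier- or policy-based filtering.

    import re
    from datetime import timedelta

    # Hypothetical sketch; real systems would combine policy-tuned models,
    # classifiers, and review rather than rely on regex alone.
    ADULT_INTERVAL = timedelta(hours=3)  # section 3(2)(b)
    MINOR_INTERVAL = timedelta(hours=1)  # section 4(2)(b)

    def notification_interval(user_is_minor: bool) -> timedelta:
        """Cadence for the recurring artificiality notification."""
        return MINOR_INTERVAL if user_is_minor else ADULT_INTERVAL

    HUMAN_CLAIM_PATTERNS = [
        re.compile(r"\bi(?:'m| am)\s+(?:a\s+)?(?:real\s+)?(?:human|person)\b",
                   re.IGNORECASE),
        re.compile(r"\bi(?:'m| am)\s+not\s+(?:an?\s+)?(?:ai|bot|chatbot)\b",
                   re.IGNORECASE),
    ]

    def claims_to_be_human(candidate_output: str) -> bool:
        """Flag outputs that assert humanity or contradict the required
        notification; flagged outputs would be blocked or regenerated
        before being shown to the user."""
        return any(p.search(candidate_output) for p in HUMAN_CLAIM_PATTERNS)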

Section 5

  1. An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of harm by users.

  2. The protocol must:

    1. Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders;

    2. Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and

    3. Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.

  3. The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm and the number of crisis referral notifications issued to users in the preceding calendar year.
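
A minimal sketch of such a protocol follows, assuming an operator-supplied classify function for detecting expressions of suicidal ideation or self-harm; the statute requires "reasonable methods" and a referral to crisis resources without prescribing either, and 988 is the United States suicide and crisis lifeline.

    # Hypothetical sketch of a section 5 protocol; classify is supplied by
    # the operator (keyword list, trained classifier, etc.).
    CRISIS_REFERRAL = (
        "If you are thinking about suicide or self-harm, help is "
        "available: call or text 988 (Suicide and Crisis Lifeline)."
    )

    class CrisisProtocol:
        def __init__(self, classify):
            self.classify = classify     # message -> bool
            self.referrals_issued = 0    # tallied for the annual public
                                         # disclosure under section 5(3)

        def handle(self, user_message):
            """Return a crisis referral when a message expresses suicidal
            ideation or self-harm (section 5(2)(a)-(b)), else None."""
            if self.classify(user_message):
                self.referrals_issued += 1
                return CRISIS_REFERRAL
            return None

    # A deliberately naive classifier, for illustration only:
    protocol = CrisisProtocol(classify=lambda m: "want to die" in m.lower())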

Section 6

The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.

Section 8

If any provision of this act or its application to any person or circumstance is held invalid, the remainder of the act or the application of the provision to other persons or circumstances is not affected.

Section 9

This act takes effect January 1, 2027.

