SB 6284 - AI consumer protections

Section 1

The legislature finds that artificial intelligence is a dynamic technology that is changing the way Washingtonians live, learn, and work. Emerging artificial intelligence technologies seek to enhance information gathering and decision making, create new efficiencies throughout the economy, and advance scientific discovery. The legislature also recognizes that there are potential risks associated with the rapid development and deployment of artificial intelligence systems, including the generation of bias and unintentional discrimination. These risks are especially profound when an artificial intelligence system is used to make decisions that have a material or legal impact on someone's life.

The legislature recognizes Washington's role as a leader and innovator within the high-technology economy. The legislature intends to adopt an artificial intelligence regulatory framework that continues to promote innovation, reduce risk, and protect residents from discriminatory actions. The legislature further recognizes that regulatory frameworks that align with national standards and specify clear compliance requirements provide developers with certainty to support continued innovation while also protecting consumers from unfair or unclear artificial intelligence decisions.

Therefore, it is the intent of the legislature to protect Washingtonians from algorithmic discrimination by establishing a comprehensive risk-based approach to artificial intelligence accountability. The legislature intends to regulate deployers of artificial intelligence under the presumption that they are acting in good faith when they comply with the provisions of this act. The legislature also intends to ensure that government agencies are transparent and accountable by requiring disclosure when consumer interactions are supported by an artificial intelligence system. Finally, the legislature intends to extend and expand the work of the artificial intelligence task force to develop a framework for the adoption of artificial intelligence in the workplace in a way that centers workers and protects fairness and opportunity.

Section 2

The definitions in this section apply throughout this chapter unless the context clearly requires otherwise.

  1. "Algorithmic discrimination":

    1. Means the use of an artificial intelligence system that results in any unlawful differential impact that disfavors any individual or group of individuals on a basis protected under chapter 49.60 RCW or federal law; and

    2. Does not include the following:

      1. Any offer, license, or use of a high-risk artificial intelligence system by a developer or deployer for the sole purpose of: (A) The developer's or deployer's testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law; or (B) expanding an applicant, customer, or participant pool to increase diversity or redress historic discrimination; or

      2. Any act or omission by or on behalf of a private club or other establishment not in fact open to the public, as set forth in Title II of the Civil Rights Act of 1964, 42 U.S.C. Sec. 2000a(e), as amended.

  2. "Artificial intelligence" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.

  3. "Consequential decision" means any decision that has a material legal or similarly significant effect on the provision or denial of any consumer's access to:

    1. Pardon, parole, probation, or release;

    2. Education enrollment or opportunity;

    3. Employment;

    4. A financial or lending service;

    5. An essential government service;

    6. Health care services;

    7. Housing;

    8. Insurance; or

    9. Legal service.

  4. "Consumer" means any individual who is a resident of this state.

  5. "Deploys" or "deployed" means to put a high-risk artificial intelligence system into use.

  6. "Deployer" means any person doing business in this state that deploys a high-risk artificial intelligence system in the state.

  7. "Developer" means any person doing business in this state that develops, or intentionally and substantially modifies, a high-risk artificial intelligence system intended for use within the state.

  8. "High-risk artificial intelligence system":

    1. Means any artificial intelligence system designed by its developer to, when deployed, make, or be a substantial factor in making, a consequential decision; and

    2. Does not include:

      1. Any artificial intelligence system that is intended to:

(A) Perform any narrow procedural task;

(B) Improve the result of a previously completed human activity;

(C) Perform a preparatory task to an assessment relevant to a consequential decision; or

(D) Detect any decision-making pattern, or any deviation from any preexisting decision-making pattern;

      2. Any antifraud technology, antimalware, antivirus, calculator, cybersecurity, database, data storage, firewall, internet domain registration, internet website loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, webcaching, webhosting, search engine, or similar technology; or

      3. Any technology that communicates in natural language for the purpose of providing users with information, making referrals or recommendations, answering questions, or generating other content, and is subject to an acceptable use policy that prohibits generating content that is unlawful.

  9. "Intentional and substantial modification" or "intentionally and substantially modifies" means a deliberate change made to an artificial intelligence system that materially increases the risk of algorithmic discrimination.

  2. "Person" means any individual, association, corporation, limited liability company, partnership, trust, or other legal entity.

  3. "Substantial factor" means a factor that is:

    1. Considered when making a consequential decision;

    2. Likely to alter the outcome of a consequential decision; and

    3. Weighed more heavily by a deployer of the applicable high-risk artificial intelligence system than any other factor contributing to the consequential decision.

Section 3

  1. [Empty]

    1. Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.

    2. In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.

  2. [Empty]

    1. By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.

    2. If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.

  3. Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.

Section 4

  1. Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system.

  2. [Empty]

    1. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system.

    2. A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering:

      1. The size and complexity of the deployer;

      2. The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems;

      3. The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and

      4. A risk management framework that either:

(A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or

(B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate.

    3. A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.

  3. Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
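
By way of illustration only, a deployer's risk management program under this section could be organized around the core functions of the NIST AI Risk Management Framework (govern, map, measure, manage), one of the frameworks this section allows, together with a recurring review cycle. The Python sketch below is a hypothetical outline; none of the names or fields are prescribed by the bill.

```python
# Illustrative sketch only; structure and names are hypothetical, not prescribed by SB 6284.
from dataclasses import dataclass, field
from datetime import date, timedelta

# Core functions of the NIST AI Risk Management Framework (AI RMF 1.0),
# one of the frameworks referenced in this section.
NIST_AI_RMF_FUNCTIONS = ("govern", "map", "measure", "manage")


@dataclass
class RiskEntry:
    """One known or reasonably foreseeable risk of algorithmic discrimination."""
    description: str
    rmf_function: str   # which AI RMF function addresses the risk
    mitigation: str
    owner: str          # responsible personnel identified in the policy


@dataclass
class RiskManagementProgram:
    """A per-system record covering the policy elements described in this section."""
    system_name: str
    risks: list[RiskEntry] = field(default_factory=list)
    last_reviewed: date | None = None

    def next_review_due(self) -> date:
        # This section calls for regular, systematic review over the system's
        # lifecycle; section 3(2) separately requires review at least annually.
        if self.last_reviewed is None:
            return date.today()
        return self.last_reviewed + timedelta(days=365)
```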

Section 5

  1. Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for:

    1. The high-risk artificial intelligence system; and

    2. A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available.

  2. Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer:

    1. A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;

    2. An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks;

    3. A description of the following:

      1. The categories of data the high-risk artificial intelligence system processes as inputs;

      2. The outputs the high-risk artificial intelligence system produces;

      3. Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;

      4. A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and

    4. A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system.

  3. In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system.

  4. A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.

  5. If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection.

  6. A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.

  7. Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
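
For illustration, the required contents of an impact assessment under subsection (2) of this section map naturally onto a structured record. The Python sketch below is one hypothetical way a deployer might organize that documentation; the class, field names, and helper are not prescribed by the bill.

```python
# Illustrative only: field names are hypothetical, not prescribed by SB 6284.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    """One way a deployer might record the items listed in section 5(2)."""
    system_name: str
    completed_on: date
    # Purpose, intended use cases, deployment context, and benefits
    purpose_and_intended_use: str
    # Known or reasonably foreseeable risks of algorithmic discrimination
    # and the steps taken to mitigate them
    discrimination_risk_analysis: str
    mitigation_steps: list[str] = field(default_factory=list)
    # Data categories, outputs, evaluation metrics, and transparency measures
    input_data_categories: list[str] = field(default_factory=list)
    outputs_produced: list[str] = field(default_factory=list)
    performance_metrics: list[str] = field(default_factory=list)
    transparency_measures: list[str] = field(default_factory=list)
    # Postdeployment monitoring and user safeguards
    postdeployment_monitoring: str = ""
    # Only required after an intentional and substantial modification (subsection (3))
    variance_from_developer_intended_use: str | None = None


def retention_deadline(final_deployment: date) -> date:
    """Subsection (6): keep records at least three years after final deployment."""
    try:
        return final_deployment.replace(year=final_deployment.year + 3)
    except ValueError:  # February 29 in a non-leap year
        return final_deployment.replace(year=final_deployment.year + 3, day=28)
```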

Section 6

  1. The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed:

    1. The deployer:

      1. Employs fewer than 50 full-time equivalent employees; and

      2. Does not use the deployer's own data to train the high-risk artificial intelligence system;

    2. The high-risk artificial intelligence system:

      1. Is used for the intended uses that are disclosed by the deployer; and

      2. Continues learning based on data derived from sources other than the deployer's own data; and

    3. The deployer makes available to consumers any impact assessment that:

      1. The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and

      2. Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act.

  2. Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.

Section 7

Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:

  1. Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and

  2. Provide to the consumer a statement disclosing:

    1. The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions;

    2. The contact information for the deployer; and

    3. A description, in plain language, of the high-risk artificial intelligence system.
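
As a rough illustration of this section, the sketch below assembles the pre-decision statement a deployer must provide to the consumer. The notice format, class name, and example values are hypothetical; the bill specifies only the items to be disclosed, not how they are presented.

```python
# Illustrative only: the notice format is not prescribed by SB 6284.
from dataclasses import dataclass


@dataclass
class ConsequentialDecisionNotice:
    """Items section 7 requires a deployer to disclose before the decision is made."""
    system_purpose: str              # purpose of the high-risk AI system
    decision_nature: str             # nature of the consequential decision
    deployer_contact: str            # contact information for the deployer
    plain_language_description: str  # plain-language description of the system

    def render(self) -> str:
        return (
            "This decision is made, or substantially informed, by an "
            "artificial intelligence system.\n"
            f"Purpose: {self.system_purpose}\n"
            f"Decision affected: {self.decision_nature}\n"
            f"About the system: {self.plain_language_description}\n"
            f"Questions: {self.deployer_contact}"
        )


# Hypothetical example: a tenant-screening deployment (housing is a
# consequential decision under section 2 of this act).
notice = ConsequentialDecisionNotice(
    system_purpose="Scores rental applications for completeness and credit risk",
    decision_nature="Approval or denial of a rental application",
    deployer_contact="privacy@example-property-mgmt.com",
    plain_language_description=(
        "The system compares your application against historical rental data "
        "to produce a recommendation that a leasing agent reviews."
    ),
)
print(notice.render())
```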

Section 8

  1. Nothing in this chapter may be construed to:

    1. Restrict a developer's, deployer's, or other person's ability to:

      1. Comply with federal, state, or municipal law;

      2. Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, municipal, or other governmental authorities;

      3. Cooperate with law enforcement agencies concerning conduct or activity that the developer, deployer, or other person reasonably and in good faith believes may violate federal, state, or municipal law;

      4. Investigate, establish, exercise, prepare for, or defend legal claims;

      5. Take immediate steps to protect an interest that is essential for the life or physical safety of a consumer or another individual;

      6. Engage in public or peer-reviewed scientific or statistical research in the public interest that adheres to all other applicable ethics and privacy laws and is conducted in accordance with 45 C.F.R. Part 46, as amended from time to time, or relevant requirements established by the federal food and drug administration;

      7. Conduct any research, testing, or development activities regarding any artificial intelligence system or model, other than testing conducted under real world conditions, before such artificial intelligence system or model is placed on the market, deployed, or put into service, as applicable;

      8. Effectuate a product recall;

      9. Identify and repair technical errors that impair existing or intended functionality; or

      10. Assist another developer, deployer, or person with any of the obligations imposed under this chapter;

    2. Impose any obligation on a developer, deployer, or other person that adversely affects the rights or freedoms of any person including, but not limited to, the rights of any person to freedom of speech or freedom of the press guaranteed in the First Amendment to the United States Constitution;

    3. Apply to any developer, deployer, or other person:

      1. Insofar as such developer, deployer, or other person develops, deploys, puts into service, or intentionally and substantially modifies, as applicable, a high-risk artificial intelligence system that has been approved, authorized, certified, cleared, or granted:

(A) By a federal agency, such as the federal food and drug administration or the federal aviation administration, acting within the scope of such federal agency's authority; or

(B) In compliance with standards established by any federal agency including, but not limited to, standards established by the federal office of the national coordinator for health information technology;

      2. Conducting any research to support an application for approval or certification from any federal agency including, but not limited to, the federal aviation administration, the federal communications commission, or the federal food and drug administration, or otherwise subject to review by such federal agency;

      3. Performing work under, or in connection with, a contract with the United States department of commerce, the United States department of defense, or the national aeronautics and space administration, unless such developer, deployer, or other person is performing such work on a high-risk artificial intelligence system that is used to make, or as a substantial factor in making, a decision concerning employment or housing; or

      4. That is a covered entity within the meaning of the health insurance portability and accountability act of 1996, P.L. 104-191, and the regulations promulgated thereunder, as both may be amended from time to time, and providing health care recommendations that:

(A) Are generated by an artificial intelligence system;

(B) Require a health care provider to take action to implement such recommendations; and

(C) Are not considered to be high risk; or

    4. Apply to any artificial intelligence system that is acquired by or for the federal government or any federal agency or department including, but not limited to, the United States department of commerce, the United States department of defense, or the national aeronautics and space administration, unless such artificial intelligence system is a high-risk artificial intelligence system that is used to make, or as a substantial factor in making, a decision concerning employment or housing.

  2. If a developer, deployer, or other person engages in any action pursuant to an exemption set forth in this section, the developer, deployer, or other person bears the burden of demonstrating that such action qualifies for such exemption.

Section 9

  1. [Empty]

    1. The attorney general may bring an action in the name of the state, or as parens patriae on behalf of persons residing in the state, to enforce this chapter. For actions brought by the attorney general to enforce this chapter, a violation of this chapter is an unfair or deceptive act in trade or commerce for the purpose of applying the consumer protection act, chapter 19.86 RCW. An action to enforce this chapter may not be brought under RCW 19.86.090.

    2. The office of the attorney general, before commencing an action under the consumer protection act, chapter 19.86 RCW, must provide 45 days' written notice to a deployer or developer of the alleged violation of this chapter. For the first violation, the developer or deployer may cure the noticed violation within 60 days of receiving the written notice.

  2. Nothing in this chapter may be construed to limit or otherwise affect the obligations of developers and deployers under applicable laws, rules, or regulations relating to data privacy or security.

Section 10

  1. A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be:

    1. Clearly and conspicuously posted;

    2. Written in plain language; and

    3. Presented without the use of a dark pattern.

  2. The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page.

  3. A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

Section 13

(1) Subject to the availability of amounts appropriated for this specific purpose, a task force to assess current uses and trends and make recommendations to the legislature regarding guidelines and potential legislation for the use of artificial intelligence systems is established.

Section 14

The artificial intelligence workplace advisory group is established for the purpose of developing artificial intelligence policy related to the workplace. The artificial intelligence workplace advisory group shall report to the artificial intelligence task force established in section 2, chapter 163, Laws of 2024 as prescribed in subsection (4) of this section.

  1. The attorney general shall appoint the following members to the artificial intelligence workplace advisory group:

    1. Two members representing different statewide labor organizations;

    2. Two members representing the business community;

    3. One member representing public sector employees;

    4. One member representing private sector employees;

    5. One member representing higher education institutions with expertise in artificial intelligence and the workforce;

    6. One member with expertise in ethics and artificial intelligence; and

    7. Other members as deemed appropriate by the attorney general.

  2. The artificial intelligence workplace advisory group is responsible for developing guiding principles for the use of artificial intelligence in the workplace. At a minimum, the guiding principles must prioritize the responsible and ethical use of artificial intelligence tools in ways that protect an individual's privacy and minimize the risk of bias in the workplace.

  3. The artificial intelligence workplace advisory group shall deliver an interim report on the development of guiding principles for artificial intelligence in the workplace to the artificial intelligence task force by December 1, 2026, and a final report by March 1, 2027. The final report must be included in the artificial intelligence task force's final report as required by section 2(5), chapter 163, Laws of 2024.

  4. This section expires June 30, 2028.

