Amended IN Senate March 26, 2025

CALIFORNIA LEGISLATURE 2025-2026 REGULAR SESSION

Senate Bill No. 420

Introduced by Senator Padilla
February 18, 2025

An act to add Chapter 24.6 (commencing with Section 22756) to Division 8 of the Business and Professions Code, and to add Article 11 (commencing with Section 10285.8) to Chapter 1 of Part 2 of Division 2 of the Public Contract Code, relating to artificial intelligence.

LEGISLATIVE COUNSEL'S DIGEST

SB 420, as amended, Padilla. Automated decision systems.

The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence system to make available an AI detection tool at no cost to the user that meets certain criteria, including that the tool outputs any system provenance data, as defined, that is detected in the content. The California Consumer Privacy Act of 2018 grants a consumer various rights with respect to personal information that is collected or sold by a business, as defined, including the right to direct a business that sells or shares personal information about the consumer to third parties not to sell or share the consumer's personal information, as specified.

This bill would express the intent of the Legislature to enact legislation that would relate to strengthening, establishing, and promoting certain rights and values related to artificial intelligence.

This bill would generally regulate a developer or a deployer of a high-risk automated decision system, as defined, including by requiring a developer or a deployer to perform an impact assessment on the high-risk automated decision system before making it publicly available or deploying it, as prescribed. The bill would require a state agency to require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment and would require the state agency to keep that impact assessment confidential. The bill would also require a developer to provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment and would require the impact assessment to be kept confidential.

This bill would authorize the Attorney General or the Civil Rights Department to bring a specified civil action to enforce compliance with the bill, as prescribed, and would authorize a developer or deployer to cure, within 45 days of receiving a certain notice of a violation, the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program.

This bill would prohibit a state agency from awarding a contract for a high-risk automated decision system to a person who has violated, among other civil rights laws, the bill.

Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest.

This bill would make legislative findings to that effect.

The California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state.
Statutory provisions establish procedures for making that reimbursement.

This bill would provide that no reimbursement is required by this act for a specified reason.

Digest Key

Vote: MAJORITY   Appropriation: NO   Fiscal Committee: YES   Local Program: YES

Bill Text

The people of the State of California do enact as follows:

SECTION 1. The Legislature finds and declares all of the following:
(a) (1) Artificial intelligence technologies are becoming an integral part of daily life in California and have profound implications for privacy, equity, fairness, and public safety.
(2) It is critical to protect individuals' rights to safeguard against potential harms, including discrimination, privacy violations, and unchecked automation in critical decisionmaking processes.
(3) A comprehensive set of rights must be established to ensure artificial intelligence technologies align with the public interest and reflect the values of California residents.
(b) (1) Individuals should have the right to receive a clear and accessible explanation about how artificial intelligence systems operate, including the data they use and the decisions they make.
(2) An entity that uses artificial intelligence systems to make decisions impacting California residents should provide a mechanism to inform individuals of the system's logic, processing methods, and intended outcomes in a manner that is understandable.
(c) (1) All individuals have the right to control their personal data in relation to artificial intelligence systems. Artificial intelligence systems should operate with the highest standards of data privacy and security, in line with the California Consumer Privacy Act of 2018 and other relevant privacy laws.
(2) Before personal data is used in artificial intelligence systems, entities should obtain informed, explicit consent from individuals, and individuals should have the right to withdraw consent at any time without penalty.
(3) Entities should ensure that personal data used by artificial intelligence systems is anonymized or pseudonymized if feasible, and data retention should be limited to the purposes for which the data was initially collected.
(d) (1) Artificial intelligence systems should not discriminate against individuals based on race, gender, sexual orientation, disability, religion, socioeconomic status, or other protected characteristics under California law.
(2) Entities deploying artificial intelligence technologies should perform regular audits to identify and address any biases or inequities in their artificial intelligence systems and should ensure that artificial intelligence systems are designed and trained to promote fairness and equal treatment.
(e) (1) Individuals should have the right to hold entities accountable for any harm caused by artificial intelligence systems, and entities should be liable for the actions and decisions made by artificial intelligence technologies they deploy.
(2) An individual or group adversely affected by artificial intelligence-driven decisions should have access to a straightforward and transparent process for seeking redress, including the ability to challenge those decisions through human review and appeal mechanisms.
(f) (1) Individuals should have the right to request human oversight for significant decisions made by artificial intelligence systems that impact them, particularly in areas such as employment, health care, housing, education, and criminal justice.
(2) Artificial intelligence systems in high-stakes decisionmaking contexts should involve human review or intervention before final decisions, ensuring that automated decisions align with human values and public policy goals.

SEC. 2. Chapter 24.6 (commencing with Section 22756) is added to Division 8 of the Business and Professions Code, to read:

CHAPTER 24.6. Automated Decision Systems

22756. As used in this chapter:
(a) "Algorithmic discrimination" means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.
(b) "Artificial intelligence" means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(c) (1) "Automated decision system" means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.
(2) "Automated decision system" does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.
(d) "Deployer" means a natural person or entity that uses a high-risk automated decision system in the state.
(e) "Detecting decisionmaking patterns without influencing outcomes" means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions.
(f) "Developer" means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.
(g) "Education enrollment or opportunity" means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.
(h) "Employment or employment opportunity" means hiring, salary, wage, or other material term, condition, or privilege of an employee's employment.
(i) "Health care" means health care services or insurance for health, mental health, dental, or vision.
(j) (1) "High-risk automated decision system" means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following:
(A) Education enrollment or opportunity.
(B) Employment or employment opportunity.
(C) Essential utilities.
(D) Temporary, short-term, or long-term housing.
(E) Health care services.
(F) Lending services.
(G) A legal right or service.
(H) An essential government service.
(2) "High-risk automated decision system" does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.
(k) "Improving results of previously completed human activities" means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.
(l) "Narrow procedural task" means a limited, procedural task that has a minimal impact on outcomes.
(m) "Preparatory task for assessment" means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority.
(n) "Protected classification" means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).
(o) (1) "State agency" means any of the following:
(A) A state office, department, division, or bureau.
(B) The California State University.
(C) The Board of Parole Hearings.
(D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs.
(2) "State agency" does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1).
(p) "Substantial modification" means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining.
22756.1. (a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use.
(2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system.
(2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:
(A) The state agency does not make a substantial modification to the high-risk automated decision system.
(B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).
(C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination.
(D) The state agency is in compliance with Section 22756.3.
(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2).
(2) An impact assessment prepared pursuant to this section shall include all of the following:
(A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.
(B) A description of the high-risk automated decision system's intended outputs.
(C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.
(D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.
(E) A developer's impact assessment shall also include both of the following:
(i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.
(ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
(F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts.
(G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system.
(H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section.
(2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.

22756.2. (a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following:
(1) The purpose of the high-risk automated decision system and the specific decision it was used to make.
(2) How the high-risk automated decision system was used to make the decision.
(3) The type of data used by the high-risk automated decision system.
(4) Contact information for the deployer.
(5) A link to the statement required by subdivision (b).
(b) A deployer shall make available on its internet website a statement summarizing all of the following:
(1) The types of high-risk automated decision systems it currently deploys.
(2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems.
(3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.

22756.3. (a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.
(b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following:
(1) The use, or intended use, of the high-risk automated decision system.
(2) The size, complexity, and resources of the deployer or developer.
(3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system.
(4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.

22756.4. A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.

22756.5. (a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.
(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination.
(2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.

22756.6. (a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter.
(2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief:
(1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees.
(B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant.
(2) Injunctive relief.
(3) Reasonable attorney's fees and costs.
(4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.
(c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days' written notice to a deployer or developer of any alleged violation of this chapter.
(2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured.
(B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.

22756.7. This chapter does not apply to either of the following:
(a) An entity with 50 or fewer employees.
(b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same as, or more stringent than, this chapter.

SEC. 3. Article 11 (commencing with Section 10285.8) is added to Chapter 1 of Part 2 of Division 2 of the Public Contract Code, to read:

Article 11. High-Risk Automated Decision Systems

10285.8. (a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following:
(1) The Unruh Civil Rights Act (Section 51 of the Civil Code).
(2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code).
(3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code.
(b) As used in this section, "high-risk automated decision system" has the same meaning as defined in Section 22756 of the Business and Professions Code.

SEC. 4. The Legislature finds and declares that Section 2 of this act, which adds Chapter 24.6 (commencing with Section 22756) of the Business and Professions Code, imposes a limitation on the public's right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution.
Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:
To avoid unduly disrupting commerce, it is necessary that trade secrets be protected.

SEC. 5. No reimbursement is required by this act pursuant to Section 6 of Article XIII B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII B of the California Constitution.
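For readers building compliance checklists or tooling around this chapter, the tiered civil penalty in proposed Section 22756.6(b)(1) reduces to simple arithmetic: a base amount keyed to the defendant's employee count, plus, for intentional violations, $500 for each day of noncompliance. The sketch below is a minimal illustrative reading only, not part of the bill text; the function name, its inputs, and the assumption that the daily increment is added on top of the base tier are illustrative choices, not language drawn from the bill.

```python
# Illustrative sketch of the penalty tiers in proposed B&P Code Sec. 22756.6(b)(1),
# assuming tiers are keyed to the defendant's employee count and that an
# intentional violation adds $500 per day of noncompliance (subparagraph (B)).

def civil_penalty_estimate(employee_count: int,
                           intentional: bool = False,
                           days_noncompliant: int = 0) -> int:
    """Estimate, in dollars, the penalty for failing to conduct a required
    impact assessment, under the assumptions stated above."""
    if employee_count < 100:
        base = 2_500      # fewer than 100 employees
    elif employee_count < 500:
        base = 5_000      # fewer than 500 employees
    else:
        base = 10_000     # at least 500 employees
    if intentional:
        base += 500 * days_noncompliant  # daily increment for intentional violations
    return base

# Example: a 250-employee developer with an intentional violation uncured for 10 days.
print(civil_penalty_estimate(250, intentional=True, days_noncompliant=10))  # 10000
```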
The bill would also require a developer to provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment and would require the impact assessment to be kept confidential.This bill would authorize the Attorney General or the Civil Rights Department to bring a specified civil action to enforce compliance with the bill, as prescribed, and would authorize a developer or deployer to cure, within 45 days of receiving a certain notice of a violation, the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program.This bill would prohibit a state agency from awarding a contract for a high-risk automated decision system to a person who has violated, among other civil rights laws, the bill.Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest.This bill would make legislative findings to that effect.The California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.This bill would provide that no reimbursement is required by this act for a specified reason.Digest Key Vote: MAJORITY Appropriation: NO Fiscal Committee: NOYES Local Program: NOYES Amended IN Senate March 26, 2025 Amended IN Senate March 26, 2025 CALIFORNIA LEGISLATURE 20252026 REGULAR SESSION Senate Bill No. 420 Introduced by Senator PadillaFebruary 18, 2025 Introduced by Senator Padilla February 18, 2025 An act to add Chapter 24.6 (commencing with Section 22756) to Division 8 of the Business and Professions Code, and to add Article 11 (commencing with Section 10285.8) to Chapter 1 of Part 2 of Division 2 of the Public Contract Code, relating to artificial intelligence. LEGISLATIVE COUNSEL'S DIGEST ## LEGISLATIVE COUNSEL'S DIGEST SB 420, as amended, Padilla. Individual rights. Automated decision systems. The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence system to make available an AI detection tool at no cost to the user that meets certain criteria, including that the tool outputs any system provenance data, as defined, that is detected in the content. The California Consumer Privacy Act of 2018 grants a consumer various rights with respect to personal information that is collected or sold by a business, as defined, including the right to direct a business that sells or shares personal information about the consumer to third parties not to sell or share the consumers personal information, as specified.This bill would express the intent of the Legislature to enact legislation that would relate to strengthening, establishing, and promoting certain rights and values related to artificial intelligence.This bill would generally regulate a developer or a deployer of a high-risk automated decision system, as defined, including by requiring a developer or a deployer to perform an impact assessment on the high-risk automated decision system before making it publicly available or deploying it, as prescribed. 
The bill would require a state agency to require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment and would require the state agency to keep that impact assessment confidential. The bill would also require a developer to provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment and would require the impact assessment to be kept confidential.This bill would authorize the Attorney General or the Civil Rights Department to bring a specified civil action to enforce compliance with the bill, as prescribed, and would authorize a developer or deployer to cure, within 45 days of receiving a certain notice of a violation, the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program.This bill would prohibit a state agency from awarding a contract for a high-risk automated decision system to a person who has violated, among other civil rights laws, the bill.Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest.This bill would make legislative findings to that effect.The California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.This bill would provide that no reimbursement is required by this act for a specified reason. The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence system to make available an AI detection tool at no cost to the user that meets certain criteria, including that the tool outputs any system provenance data, as defined, that is detected in the content. The California Consumer Privacy Act of 2018 grants a consumer various rights with respect to personal information that is collected or sold by a business, as defined, including the right to direct a business that sells or shares personal information about the consumer to third parties not to sell or share the consumers personal information, as specified. This bill would express the intent of the Legislature to enact legislation that would relate to strengthening, establishing, and promoting certain rights and values related to artificial intelligence. This bill would generally regulate a developer or a deployer of a high-risk automated decision system, as defined, including by requiring a developer or a deployer to perform an impact assessment on the high-risk automated decision system before making it publicly available or deploying it, as prescribed. The bill would require a state agency to require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment and would require the state agency to keep that impact assessment confidential. 
The bill would also require a developer to provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment and would require the impact assessment to be kept confidential. This bill would authorize the Attorney General or the Civil Rights Department to bring a specified civil action to enforce compliance with the bill, as prescribed, and would authorize a developer or deployer to cure, within 45 days of receiving a certain notice of a violation, the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program. This bill would prohibit a state agency from awarding a contract for a high-risk automated decision system to a person who has violated, among other civil rights laws, the bill. Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest. This bill would make legislative findings to that effect. The California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement. This bill would provide that no reimbursement is required by this act for a specified reason. ## Digest Key ## Bill Text The people of the State of California do enact as follows:SECTION 1. The Legislature finds and declares all of the following:(a) (1) Artificial intelligence technologies are becoming an integral part of daily life in California and have profound implications for privacy, equity, fairness, and public safety.(2) It is critical to protect individuals rights to safeguard against potential harms, including discrimination, privacy violations, and unchecked automation in critical decisionmaking processes.(3) A comprehensive set of rights must be established to ensure artificial intelligence technologies align with the public interest and reflect the values of California residents.(b) (1) Individuals should have the right to receive a clear and accessible explanation about how artificial intelligence systems operate, including the data they use and the decisions they make. (2) An entity that uses artificial intelligence systems to make decisions impacting California residents should provide a mechanism to inform individuals of the systems logic, processing methods, and intended outcomes in a manner that is understandable.(c) (1) All individuals have the right to control their personal data in relation to artificial intelligence systems. 
Artificial intelligence systems should operate with the highest standards of data privacy and security, in line with the California Consumer Privacy Act of 2018 and other relevant privacy laws.(2) Before personal data is used in artificial intelligence systems, entities should obtain informed, explicit consent from individuals, and individuals should have the right to withdraw consent at any time without penalty.(3) Entities should ensure that personal data used by artificial intelligence systems is anonymized or pseudonymized if feasible, and data retention should be limited to the purposes for which the data was initially collected.(d) (1) Artificial intelligence systems should not discriminate against individuals based on race, gender, sexual orientation, disability, religion, socioeconomic status, or other protected characteristics under California law. (2) Entities deploying artificial intelligence technologies should perform regular audits to identify and address any biases or inequities in their artificial intelligence systems and should ensure that artificial intelligence systems are designed and trained to promote fairness and equal treatment.(e) (1) Individuals should have the right to hold entities accountable for any harm caused by artificial intelligence systems, and entities should be liable for the actions and decisions made by artificial intelligence technologies they deploy.(2) An individual or group adversely affected by artificial intelligence-driven decisions should have access to a straightforward and transparent process for seeking redress, including the ability to challenge those decisions through human review and appeal mechanisms.(f) (1) Individuals should have the right to request human oversight for significant decisions made by artificial intelligence systems that impact them, particularly in areas such as employment, health care, housing, education, and criminal justice.(2) Artificial intelligence systems in high-stakes decisionmaking contexts should involve human review or intervention before final decisions, ensuring that automated decisions align with human values and public policy goals.SEC. 2.It is the intent of the Legislature to enact legislation that would relate to strengthening, establishing, and promoting the rights and values described in Section 1 of this act.SEC. 2. Chapter 24.6 (commencing with Section 22756) is added to Division 8 of the Business and Professions Code, to read: CHAPTER 24.6. Automated Decision Systems22756. 
As used in this chapter:(a) Algorithmic discrimination means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.(b) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.(c) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.(2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.(d) Deployer means a natural person or entity that uses a high-risk automated decision system in the state.(e) Detecting decisionmaking patterns without influencing outcomes means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions.(f) Developer means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.(g) Education enrollment or opportunity means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.(h) Employment or employment opportunity means hiring, salary, wage, or other material term, condition, or privilege of an employees employment.(i) Health care means health care services or insurance for health, mental health, dental, or vision.(j) (1) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following:(A) Education enrollment or opportunity.(B) Employment or employment opportunity.(C) Essential utilities.(D) Temporary, short-term, or long-term housing.(E) Health care services.(F) Lending services.(G) A legal right or service.(H) An essential government service.(2) High-risk automated decision system does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.(k) Improving results of previously completed human activities means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.(l) Narrow procedural task means a limited, procedural task that has a minimal impact on outcomes.(m) Preparatory task for assessment means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority. 
(n) Protected classification means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).(o) (1) State agency means any of the following:(A) A state office, department, division, or bureau.(B) The California State University.(C) The Board of Parole Hearings.(D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs.(2) State agency does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1).(p) Substantial modification means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining.22756.1. (a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use.(2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system.(2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:(A) The state agency does not make a substantial modification to the high-risk automated decision system.(B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).(C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination.(D) The state agency is in compliance with Section 22756.3.(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developers impact assessment pursuant to paragraph (2).(2) An impact assessment prepared pursuant to this section shall include all of the following:(A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.(B) A description of the high-risk automated decision systems intended outputs.(C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.(D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.(E) A developers impact assessment shall also include both of the 
following:(i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.(ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.(F) A statement of the extent to which the deployers use of the high-risk automated decision system is consistent with, or varies from, the developers statement of the high-risk automated decision systems purpose and intended benefits, intended uses, and intended deployment contexts.(G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system.(H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section.(2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.22756.2. (a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following:(1) The purpose of the high-risk automated decision system and the specific decision it was used to make.(2) How the high-risk automated decision system was used to make the decision.(3) The type of data used by the high-risk automated decision system.(4) Contact information for the deployer.(5) A link to the statement required by subdivision (b).(b) A deployer shall make available on its internet website a statement summarizing all of the following:(1) The types of high-risk automated decision systems it currently deploys.(2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems.(3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.22756.3. 
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.(b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following:(1) The use, or intended use, of the high-risk automated decision system.(2) The size, complexity, and resources of the deployer or developer.(3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system.(4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.22756.4. A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.22756.5. (a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.22756.6. 
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter.(2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief:(1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees.(B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant.(2) Injunctive relief.(3) Reasonable attorneys fees and costs.(4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.(c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days written notice to a deployer or developer of any alleged violation of this chapter.(2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured.(B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.22756.7. This chapter does not apply to either of the following: (a) An entity with 50 or fewer employees.(b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter.SEC. 3. Article 11 (commencing with Section 10285.8) is added to Chapter 1 of Part 2 of Division 2 of the Public Contract Code, to read: Article 11. High-Risk Automated Decision Systems10285.8. (a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following:(1) The Unruh Civil Rights Act (Section 51 of the Civil Code).(2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code).(3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code.(b) As used in this section, high-risk automated decision system has the same meaning as defined in Section 22756 of the Business and Professions Code.SEC. 4. The Legislature finds and declares that Section 2 of this act, which adds Chapter 24.6 (commencing with Section 22756) of the Business and Professions Code, imposes a limitation on the publics right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution. 
Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:To avoid unduly disrupting commerce, it is necessary that trade secrets be protected.SEC. 5. No reimbursement is required by this act pursuant to Section 6 of Article XIIIB of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIIIB of the California Constitution. The people of the State of California do enact as follows: ## The people of the State of California do enact as follows: SECTION 1. The Legislature finds and declares all of the following:(a) (1) Artificial intelligence technologies are becoming an integral part of daily life in California and have profound implications for privacy, equity, fairness, and public safety.(2) It is critical to protect individuals rights to safeguard against potential harms, including discrimination, privacy violations, and unchecked automation in critical decisionmaking processes.(3) A comprehensive set of rights must be established to ensure artificial intelligence technologies align with the public interest and reflect the values of California residents.(b) (1) Individuals should have the right to receive a clear and accessible explanation about how artificial intelligence systems operate, including the data they use and the decisions they make. (2) An entity that uses artificial intelligence systems to make decisions impacting California residents should provide a mechanism to inform individuals of the systems logic, processing methods, and intended outcomes in a manner that is understandable.(c) (1) All individuals have the right to control their personal data in relation to artificial intelligence systems. Artificial intelligence systems should operate with the highest standards of data privacy and security, in line with the California Consumer Privacy Act of 2018 and other relevant privacy laws.(2) Before personal data is used in artificial intelligence systems, entities should obtain informed, explicit consent from individuals, and individuals should have the right to withdraw consent at any time without penalty.(3) Entities should ensure that personal data used by artificial intelligence systems is anonymized or pseudonymized if feasible, and data retention should be limited to the purposes for which the data was initially collected.(d) (1) Artificial intelligence systems should not discriminate against individuals based on race, gender, sexual orientation, disability, religion, socioeconomic status, or other protected characteristics under California law. 
(2) Entities deploying artificial intelligence technologies should perform regular audits to identify and address any biases or inequities in their artificial intelligence systems and should ensure that artificial intelligence systems are designed and trained to promote fairness and equal treatment.(e) (1) Individuals should have the right to hold entities accountable for any harm caused by artificial intelligence systems, and entities should be liable for the actions and decisions made by artificial intelligence technologies they deploy.(2) An individual or group adversely affected by artificial intelligence-driven decisions should have access to a straightforward and transparent process for seeking redress, including the ability to challenge those decisions through human review and appeal mechanisms.(f) (1) Individuals should have the right to request human oversight for significant decisions made by artificial intelligence systems that impact them, particularly in areas such as employment, health care, housing, education, and criminal justice.(2) Artificial intelligence systems in high-stakes decisionmaking contexts should involve human review or intervention before final decisions, ensuring that automated decisions align with human values and public policy goals. SECTION 1. The Legislature finds and declares all of the following:(a) (1) Artificial intelligence technologies are becoming an integral part of daily life in California and have profound implications for privacy, equity, fairness, and public safety.(2) It is critical to protect individuals rights to safeguard against potential harms, including discrimination, privacy violations, and unchecked automation in critical decisionmaking processes.(3) A comprehensive set of rights must be established to ensure artificial intelligence technologies align with the public interest and reflect the values of California residents.(b) (1) Individuals should have the right to receive a clear and accessible explanation about how artificial intelligence systems operate, including the data they use and the decisions they make. (2) An entity that uses artificial intelligence systems to make decisions impacting California residents should provide a mechanism to inform individuals of the systems logic, processing methods, and intended outcomes in a manner that is understandable.(c) (1) All individuals have the right to control their personal data in relation to artificial intelligence systems. Artificial intelligence systems should operate with the highest standards of data privacy and security, in line with the California Consumer Privacy Act of 2018 and other relevant privacy laws.(2) Before personal data is used in artificial intelligence systems, entities should obtain informed, explicit consent from individuals, and individuals should have the right to withdraw consent at any time without penalty.(3) Entities should ensure that personal data used by artificial intelligence systems is anonymized or pseudonymized if feasible, and data retention should be limited to the purposes for which the data was initially collected.(d) (1) Artificial intelligence systems should not discriminate against individuals based on race, gender, sexual orientation, disability, religion, socioeconomic status, or other protected characteristics under California law. 
It is the intent of the Legislature to enact legislation that would relate to strengthening, establishing, and promoting the rights and values described in Section 1 of this act.

SEC. 2. Chapter 24.6 (commencing with Section 22756) is added to Division 8 of the Business and Professions Code, to read:

CHAPTER 24.6. Automated Decision Systems

22756. As used in this chapter:
(a) Algorithmic discrimination means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.
(b) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(c) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. (2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.
(d) Deployer means a natural person or entity that uses a high-risk automated decision system in the state.
(e) Detecting decisionmaking patterns without influencing outcomes means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions.
(f) Developer means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.
(g) Education enrollment or opportunity means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.
(h) Employment or employment opportunity means hiring, salary, wage, or other material term, condition, or privilege of an employee's employment.
(i) Health care means health care services or insurance for health, mental health, dental, or vision.
(j) (1) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following: (A) Education enrollment or opportunity. (B) Employment or employment opportunity. (C) Essential utilities. (D) Temporary, short-term, or long-term housing. (E) Health care services. (F) Lending services. (G) A legal right or service. (H) An essential government service. (2) High-risk automated decision system does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.
(k) Improving results of previously completed human activities means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.
(l) Narrow procedural task means a limited, procedural task that has a minimal impact on outcomes.
(m) Preparatory task for assessment means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority.
(n) Protected classification means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).
(o) (1) State agency means any of the following: (A) A state office, department, division, or bureau. (B) The California State University. (C) The Board of Parole Hearings. (D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs. (2) State agency does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1).
(p) Substantial modification means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining.
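Taken together, subdivisions (c) and (j) of Section 22756 amount to a two-part test: a system is a high-risk automated decision system only if it assists or replaces a consequential human decision in one of the enumerated domains, and it is not high-risk if it only performs the excluded task types. The sketch below is merely one illustration of that reading; the class, enum, and function names are invented for the example and are not terms used by the chapter.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Domain(Enum):
    # Decision domains enumerated in Section 22756(j)(1)(A)-(H).
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_UTILITIES = auto()
    HOUSING = auto()
    HEALTH_CARE = auto()
    LENDING = auto()
    LEGAL_RIGHT_OR_SERVICE = auto()
    ESSENTIAL_GOVERNMENT_SERVICE = auto()

class ExcludedTask(Enum):
    # Task types excluded by Section 22756(j)(2).
    NARROW_PROCEDURAL_TASK = auto()
    ENHANCES_HUMAN_ACTIVITY = auto()
    DETECTS_PATTERNS_WITHOUT_INFLUENCING = auto()
    PREPARATORY_TASK_FOR_ASSESSMENT = auto()

@dataclass
class AutomatedDecisionSystem:
    name: str
    assists_or_replaces_discretionary_decision: bool
    decision_domains: set[Domain] = field(default_factory=set)
    only_performs: set[ExcludedTask] = field(default_factory=set)

def is_high_risk(ads: AutomatedDecisionSystem) -> bool:
    """One reading of the Section 22756(j) definition; illustrative only."""
    # (j)(2): a system that only performs excluded task types is not high-risk.
    if ads.only_performs:
        return False
    # (j)(1): otherwise, high-risk if it assists or replaces a discretionary
    # decision that materially impacts access to an enumerated domain.
    return ads.assists_or_replaces_discretionary_decision and bool(ads.decision_domains)

# Hypothetical examples of how the test would apply.
resume_ranker = AutomatedDecisionSystem(
    name="resume-ranker",
    assists_or_replaces_discretionary_decision=True,
    decision_domains={Domain.EMPLOYMENT},
)
pattern_dashboard = AutomatedDecisionSystem(
    name="pattern-dashboard",
    assists_or_replaces_discretionary_decision=False,
    only_performs={ExcludedTask.DETECTS_PATTERNS_WITHOUT_INFLUENCING},
)
assert is_high_risk(resume_ranker)
assert not is_high_risk(pattern_dashboard)
```

Under this reading, a hiring tool that scores applicants would fall within the definition, while a dashboard that only surfaces patterns for later human review would not; edge cases would turn on the statutory phrases rather than on anything in the sketch.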
22756.1. (a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met: (A) The state agency does not make a substantial modification to the high-risk automated decision system. (B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d). (C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination. (D) The state agency is in compliance with Section 22756.3.
(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2). (2) An impact assessment prepared pursuant to this section shall include all of the following: (A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts. (B) A description of the high-risk automated decision system's intended outputs. (C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system. (D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system. (E) A developer's impact assessment shall also include both of the following: (i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system. (ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer. (F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts. (G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system. (H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section. (2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
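Subdivision (c)(2) of Section 22756.1 reads as a fixed checklist of statements a completed impact assessment must contain. The following sketch shows one way a developer or deployer might track completion of that checklist; the class and field names paraphrase items (A) through (H) and are supplied for illustration, not drawn from the bill.

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactAssessment:
    # Items (A)-(H) of Section 22756.1(c)(2), paraphrased as field names.
    purpose_and_intended_use: str = ""        # (A) purpose, benefits, uses, contexts
    intended_outputs: str = ""                # (B) intended outputs
    input_data_summary: str = ""              # (C) input data and recommended processing
    disparate_impact_summary: str = ""        # (D) foreseeable impacts on protected classes
    developer_safeguards: str = ""            # (E)(i) developer mitigation measures
    deployer_monitoring_guidance: str = ""    # (E)(ii) how deployers can monitor
    deployer_use_vs_intended_use: str = ""    # (F) consistency with intended use
    deployer_safeguards: str = ""             # (G) deployer mitigation measures
    monitoring_and_evaluation_plan: str = ""  # (H) past and future monitoring

def missing_items(assessment: ImpactAssessment) -> list[str]:
    """Return the checklist fields that have not yet been filled in."""
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name).strip()]

# Hypothetical usage: a draft assessment with only the purpose statement completed.
draft = ImpactAssessment(purpose_and_intended_use="Triage incoming rental applications.")
print(missing_items(draft))  # lists every remaining item
```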
22756.2. (a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
(b) A deployer shall make available on its internet website a statement summarizing all of the following: (1) The types of high-risk automated decision systems it currently deploys. (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems. (3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.

22756.3. (a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.
(b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.

22756.4. A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.

22756.5. (a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.
(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
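Section 22756.5 can be read as a deployment gate: an adverse impact-assessment finding bars deployment unless safeguards are implemented and a follow-up assessment verifies that the risk has been mitigated. The sketch below encodes that reading; it treats the updated assessment as a precondition to deployment, which is only one possible interpretation of paragraphs (1) and (2) of subdivision (b), and the parameter names are invented for the example.

```python
def may_deploy(
    assessment_finds_discrimination_likely: bool,
    safeguards_implemented: bool = False,
    updated_assessment_confirms_mitigation: bool = False,
) -> bool:
    """Illustrative reading of Section 22756.5; not a statement of legal effect."""
    # 22756.5(a): no adverse finding, no bar on deployment.
    if not assessment_finds_discrimination_likely:
        return True
    # 22756.5(b): adverse finding -> deploy only with mitigation safeguards
    # and an updated assessment verifying the risk is no longer likely.
    return safeguards_implemented and updated_assessment_confirms_mitigation

assert may_deploy(False)
assert not may_deploy(True)
assert may_deploy(True, safeguards_implemented=True,
                  updated_assessment_confirms_mitigation=True)
```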
22756.6. (a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter. (2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief: (1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees. (B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant. (2) Injunctive relief. (3) Reasonable attorney's fees and costs. (4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.
(c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days' written notice to a deployer or developer of any alleged violation of this chapter. (2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured. (B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.

22756.7. This chapter does not apply to either of the following: (a) An entity with 50 or fewer employees. (b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter.
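The penalty schedule in paragraph (1) of subdivision (b) of Section 22756.6 is tiered by the defendant's employee count, with a daily increase for intentional violations. The arithmetic below reflects one reading of those tiers (fewer than 100 employees, 100 to 499 employees, and 500 or more employees); the function name and that tier interpretation are supplied for illustration only and do not resolve the overlap in the statutory phrasing.

```python
def impact_assessment_penalty(employee_count: int,
                              intentional: bool = False,
                              days_noncompliant: int = 0) -> int:
    """Civil penalty for failing to conduct a required impact assessment,
    under one reading of Section 22756.6(b)(1)."""
    if employee_count < 100:
        penalty = 2_500        # fewer than 100 employees
    elif employee_count < 500:
        penalty = 5_000        # fewer than 500 employees
    else:
        penalty = 10_000       # at least 500 employees
    if intentional:
        penalty += 500 * days_noncompliant  # (b)(1)(B) daily increase
    return penalty

assert impact_assessment_penalty(40) == 2_500
assert impact_assessment_penalty(250) == 5_000
assert impact_assessment_penalty(1_200, intentional=True, days_noncompliant=10) == 15_000
```

Note that the $25,000-per-violation penalty for algorithmic discrimination under paragraph (4), injunctive relief, and fee awards are separate forms of relief and are not modeled here.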
As used in this chapter:(a) Algorithmic discrimination means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.(b) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.(c) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.(2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.(d) Deployer means a natural person or entity that uses a high-risk automated decision system in the state.(e) Detecting decisionmaking patterns without influencing outcomes means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions.(f) Developer means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.(g) Education enrollment or opportunity means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.(h) Employment or employment opportunity means hiring, salary, wage, or other material term, condition, or privilege of an employees employment.(i) Health care means health care services or insurance for health, mental health, dental, or vision.(j) (1) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following:(A) Education enrollment or opportunity.(B) Employment or employment opportunity.(C) Essential utilities.(D) Temporary, short-term, or long-term housing.(E) Health care services.(F) Lending services.(G) A legal right or service.(H) An essential government service.(2) High-risk automated decision system does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.(k) Improving results of previously completed human activities means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.(l) Narrow procedural task means a limited, procedural task that has a minimal impact on outcomes.(m) Preparatory task for assessment means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority. 
(n) Protected classification means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).(o) (1) State agency means any of the following:(A) A state office, department, division, or bureau.(B) The California State University.(C) The Board of Parole Hearings.(D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs.(2) State agency does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1).(p) Substantial modification means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining.22756.1. (a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use.(2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system.(2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:(A) The state agency does not make a substantial modification to the high-risk automated decision system.(B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).(C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination.(D) The state agency is in compliance with Section 22756.3.(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developers impact assessment pursuant to paragraph (2).(2) An impact assessment prepared pursuant to this section shall include all of the following:(A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.(B) A description of the high-risk automated decision systems intended outputs.(C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.(D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.(E) A developers impact assessment shall also include both of the 
following:(i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.(ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.(F) A statement of the extent to which the deployers use of the high-risk automated decision system is consistent with, or varies from, the developers statement of the high-risk automated decision systems purpose and intended benefits, intended uses, and intended deployment contexts.(G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system.(H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section.(2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.22756.2. (a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following:(1) The purpose of the high-risk automated decision system and the specific decision it was used to make.(2) How the high-risk automated decision system was used to make the decision.(3) The type of data used by the high-risk automated decision system.(4) Contact information for the deployer.(5) A link to the statement required by subdivision (b).(b) A deployer shall make available on its internet website a statement summarizing all of the following:(1) The types of high-risk automated decision systems it currently deploys.(2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems.(3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.22756.3. 
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.(b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following:(1) The use, or intended use, of the high-risk automated decision system.(2) The size, complexity, and resources of the deployer or developer.(3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system.(4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.22756.4. A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.22756.5. (a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.22756.6. 
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter.(2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief:(1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees.(B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant.(2) Injunctive relief.(3) Reasonable attorneys fees and costs.(4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.(c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days written notice to a deployer or developer of any alleged violation of this chapter.(2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured.(B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.22756.7. This chapter does not apply to either of the following: (a) An entity with 50 or fewer employees.(b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter. CHAPTER 24.6. Automated Decision Systems22756. 
As used in this chapter:(a) Algorithmic discrimination means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.(b) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.(c) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.(2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.(d) Deployer means a natural person or entity that uses a high-risk automated decision system in the state.(e) Detecting decisionmaking patterns without influencing outcomes means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions.(f) Developer means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.(g) Education enrollment or opportunity means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.(h) Employment or employment opportunity means hiring, salary, wage, or other material term, condition, or privilege of an employees employment.(i) Health care means health care services or insurance for health, mental health, dental, or vision.(j) (1) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following:(A) Education enrollment or opportunity.(B) Employment or employment opportunity.(C) Essential utilities.(D) Temporary, short-term, or long-term housing.(E) Health care services.(F) Lending services.(G) A legal right or service.(H) An essential government service.(2) High-risk automated decision system does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.(k) Improving results of previously completed human activities means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.(l) Narrow procedural task means a limited, procedural task that has a minimal impact on outcomes.(m) Preparatory task for assessment means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority. 
(n) Protected classification means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).(o) (1) State agency means any of the following:(A) A state office, department, division, or bureau.(B) The California State University.(C) The Board of Parole Hearings.(D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs.(2) State agency does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1).(p) Substantial modification means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining.22756.1. (a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use.(2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system.(2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:(A) The state agency does not make a substantial modification to the high-risk automated decision system.(B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).(C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination.(D) The state agency is in compliance with Section 22756.3.(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developers impact assessment pursuant to paragraph (2).(2) An impact assessment prepared pursuant to this section shall include all of the following:(A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.(B) A description of the high-risk automated decision systems intended outputs.(C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.(D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.(E) A developers impact assessment shall also include both of the 
following:(i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.(ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.(F) A statement of the extent to which the deployers use of the high-risk automated decision system is consistent with, or varies from, the developers statement of the high-risk automated decision systems purpose and intended benefits, intended uses, and intended deployment contexts.(G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system.(H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section.(2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.22756.2. (a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following:(1) The purpose of the high-risk automated decision system and the specific decision it was used to make.(2) How the high-risk automated decision system was used to make the decision.(3) The type of data used by the high-risk automated decision system.(4) Contact information for the deployer.(5) A link to the statement required by subdivision (b).(b) A deployer shall make available on its internet website a statement summarizing all of the following:(1) The types of high-risk automated decision systems it currently deploys.(2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems.(3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.22756.3. 
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.(b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following:(1) The use, or intended use, of the high-risk automated decision system.(2) The size, complexity, and resources of the deployer or developer.(3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system.(4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.22756.4. A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.22756.5. (a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.22756.6. 
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter.(2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief:(1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees.(B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant.(2) Injunctive relief.(3) Reasonable attorneys fees and costs.(4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.(c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days written notice to a deployer or developer of any alleged violation of this chapter.(2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured.(B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.22756.7. This chapter does not apply to either of the following: (a) An entity with 50 or fewer employees.(b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter. CHAPTER 24.6. Automated Decision Systems CHAPTER 24.6. Automated Decision Systems 22756. 
As used in this chapter:(a) Algorithmic discrimination means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification.(b) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.(c) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.(2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.(d) Deployer means a natural person or entity that uses a high-risk automated decision system in the state.(e) Detecting decisionmaking patterns without influencing outcomes means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions.(f) Developer means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state.(g) Education enrollment or opportunity means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity.(h) Employment or employment opportunity means hiring, salary, wage, or other material term, condition, or privilege of an employees employment.(i) Health care means health care services or insurance for health, mental health, dental, or vision.(j) (1) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following:(A) Education enrollment or opportunity.(B) Employment or employment opportunity.(C) Essential utilities.(D) Temporary, short-term, or long-term housing.(E) Health care services.(F) Lending services.(G) A legal right or service.(H) An essential government service.(2) High-risk automated decision system does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment.(k) Improving results of previously completed human activities means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions.(l) Narrow procedural task means a limited, procedural task that has a minimal impact on outcomes.(m) Preparatory task for assessment means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority. 
(n) Protected classification means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code).(o) (1) State agency means any of the following:(A) A state office, department, division, or bureau.(B) The California State University.(C) The Board of Parole Hearings.(D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs.(2) State agency does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1).(p) Substantial modification means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining. 22756. As used in this chapter: (a) Algorithmic discrimination means the condition in which an automated decision system contributes to unlawful discrimination on the basis of a protected classification. (b) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments. (c) (1) Automated decision system means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. (2) Automated decision system does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data. (d) Deployer means a natural person or entity that uses a high-risk automated decision system in the state. (e) Detecting decisionmaking patterns without influencing outcomes means the act of artificial intelligence analyzing patterns for informational purposes without direct influence on decisions. (f) Developer means a natural person or entity that designs, codes, produces, or substantially modifies a high-risk automated decision system for use in the state. (g) Education enrollment or opportunity means the chance to obtain admission, accreditation, evaluation, certification, vocational training, financial aid, or scholarships with respect to an educational opportunity. (h) Employment or employment opportunity means hiring, salary, wage, or other material term, condition, or privilege of an employees employment. (i) Health care means health care services or insurance for health, mental health, dental, or vision. (j) (1) High-risk automated decision system means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, any of the following: (A) Education enrollment or opportunity. (B) Employment or employment opportunity. (C) Essential utilities. (D) Temporary, short-term, or long-term housing. (E) Health care services. (F) Lending services. (G) A legal right or service. (H) An essential government service. 
(2) High-risk automated decision system does not include an automated decision system that only performs narrow procedural tasks, enhances human activities, detects patterns without influencing decisions, or assists in preparatory tasks for assessment. (k) Improving results of previously completed human activities means the act of artificial intelligence enhancing existing human-performed tasks without altering decisions. (l) Narrow procedural task means a limited, procedural task that has a minimal impact on outcomes. (m) Preparatory task for assessment means a task in which an artificial intelligence aids in a preparatory task for assessment or evaluation without direct decisionmaking authority. (n) Protected classification means a classification protected under existing law prohibiting discrimination, including, but not limited to, the Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code) or the Unruh Civil Rights Act (Section 51 of the Civil Code). (o) (1) State agency means any of the following: (A) A state office, department, division, or bureau. (B) The California State University. (C) The Board of Parole Hearings. (D) A board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs. (2) State agency does not include the University of California, the Legislature, the judicial branch, or a board that is not described in paragraph (1). (p) Substantial modification means a new version, release, or other significant update that materially changes the functionality or performance of a high-risk automated decision system, including the results of retraining. 22756.1. (a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use.(2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system.(2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:(A) The state agency does not make a substantial modification to the high-risk automated decision system.(B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).(C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination.(D) The state agency is in compliance with Section 22756.3.(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developers impact assessment pursuant to paragraph (2).(2) An impact assessment prepared pursuant to this section shall include all of the following:(A) A statement of 
(A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.
(B) A description of the high-risk automated decision system's intended outputs.
(C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.
(D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.
(E) A developer's impact assessment shall also include both of the following:
(i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.
(ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
(F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts.
(G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system.
(H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
(d) (1) A state agency shall require a developer of a high-risk automated decision system deployed by the state agency to provide to the state agency a copy of the impact assessment conducted pursuant to this section.
(2) Notwithstanding any other law, an impact assessment provided to a state agency pursuant to this subdivision shall be kept confidential.
22756.2. (a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following:
(1) The purpose of the high-risk automated decision system and the specific decision it was used to make.
(2) How the high-risk automated decision system was used to make the decision.
(3) The type of data used by the high-risk automated decision system.
(4) Contact information for the deployer.
(5) A link to the statement required by subdivision (b).
(b) A deployer shall make available on its internet website a statement summarizing all of the following:
(1) The types of high-risk automated decision systems it currently deploys.
(2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems.
(3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
22756.3. (a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system.
(b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following:
(1) The use, or intended use, of the high-risk automated decision system.
(2) The size, complexity, and resources of the deployer or developer.
(3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system.
(4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
22756.4. A developer or deployer is not required to disclose information under this chapter if the disclosure of that information would result in the waiver of a legal privilege or the disclosure of a trade secret, as defined in Section 3426.1 of the Civil Code.
22756.5. (a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.
(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination.
(2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
22756.6. (a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter.
(2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
(b) The Attorney General or the Civil Rights Department may bring a civil action against a deployer or developer for a violation of this chapter and obtain any of the following relief:
(1) (A) If a developer or deployer fails to conduct an impact assessment as required under this chapter, a civil penalty of two thousand five hundred dollars ($2,500) for a defendant with fewer than 100 employees, five thousand dollars ($5,000) if the defendant has fewer than 500 employees, and ten thousand dollars ($10,000) if the defendant has at least 500 employees.
(B) If a violation is intentional, the civil penalty pursuant to this paragraph shall increase by five hundred dollars ($500) for each day that the defendant is noncompliant.
(2) Injunctive relief.
(3) Reasonable attorneys' fees and costs.
(4) If the violation concerns algorithmic discrimination, a civil penalty of twenty-five thousand dollars ($25,000) per violation.
(c) (1) Before commencing an action pursuant to this section, the Attorney General or the Civil Rights Department shall provide 45 days' written notice to a deployer or developer of any alleged violation of this chapter.
(2) (A) The developer or deployer may cure, within 45 days of receiving the written notice described in paragraph (1), the noticed violation and provide an express written statement, made under penalty of perjury, that the violation has been cured.
(B) If the developer or deployer cures the noticed violation and provides the express written statement pursuant to subparagraph (A), an action shall not be maintained for the noticed violation.
22756.7. This chapter does not apply to either of the following:
(a) An entity with 50 or fewer employees.
(b) A high-risk automated decision system that has been approved, certified, or cleared by a federal agency that complies with another law that is substantially the same or more stringent than this chapter.
SEC. 3. Article 11 (commencing with Section 10285.8) is added to Chapter 1 of Part 2 of Division 2 of the Public Contract Code, to read:
Article 11. High-Risk Automated Decision Systems
10285.8. (a) A state agency shall not award a contract for a high-risk automated decision system to a person who has violated any of the following:
(1) The Unruh Civil Rights Act (Section 51 of the Civil Code).
(2) The California Fair Employment and Housing Act (Chapter 7 (commencing with Section 12960) of Part 2.8 of Division 3 of Title 2 of the Government Code).
(3) Chapter 24.6 (commencing with Section 22756) of Division 8 of the Business and Professions Code.
(b) As used in this section, high-risk automated decision system has the same meaning as defined in Section 22756 of the Business and Professions Code.
SEC. 4. The Legislature finds and declares that Section 2 of this act, which adds Chapter 24.6 (commencing with Section 22756) of the Business and Professions Code, imposes a limitation on the public's right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution. Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:
To avoid unduly disrupting commerce, it is necessary that trade secrets be protected.
SEC. 5. No reimbursement is required by this act pursuant to Section 6 of Article XIII B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII B of the California Constitution.