California 2023-2024 Regular Session

California Senate Bill 892, Amended

Filed 06/21/2024


Amended IN Assembly June 21, 2024
Amended IN Senate April 10, 2024
Amended IN Senate April 01, 2024

CALIFORNIA LEGISLATURE 2023-2024 REGULAR SESSION

Senate Bill No. 892


Introduced by Senator Padilla (Coauthors: Senators Rubio and Smallwood-Cuevas)
January 03, 2024

An act to add Section 12100.1 to the Public Contract Code, relating to public contracts.

## LEGISLATIVE COUNSEL'S DIGEST

SB 892, as amended, Padilla. Public contracts: automated decision systems: AI risk management standards.

Existing law requires all contracts for the acquisition of information technology goods and services related to information technology projects, as defined, to be made by or under the supervision of the Department of Technology. Existing law requires all other contracts for the acquisition of information technology goods or services to be made by or under the supervision of the Department of General Services. Under existing law, both the Department of Technology and the Department of General Services are authorized to delegate their authority to another agency, as specified.

This bill would require the Department of Technology to develop and adopt regulations to create an artificial intelligence (AI) risk management standard, as specified. To develop those regulations, the bill would authorize the department to apply principles and industry standards addressed in specified publications regarding AI risk management. The bill would require the AI risk management standard to include, among other things, a detailed risk assessment procedure for procuring automated decision systems (ADS), as defined, that analyzes specified characteristics of the ADS, methods for appropriate risk controls, as provided, and adverse incident monitoring procedures. The bill would require the department to, among other things, collaborate with specified organizations to develop the AI risk management standard.

This bill would, commencing six months after the date on which the regulations described in the paragraph above are approved and final, prohibit a state agency from entering into a contract for an ADS, or any service that utilizes an ADS, unless the contract includes a clause that, among other things, provides a completed risk assessment of the relevant ADS, as specified, requires adherence to appropriate risk controls, and provides procedures for adverse incident monitoring.

## Digest Key

Vote: MAJORITY  Appropriation: NO  Fiscal Committee: YES  Local Program: NO

## Bill Text


## The people of the State of California do enact as follows:


SECTION 1. Section 12100.1 is added to the Public Contract Code, to read:

12100.1. (a) For purposes of this section, the following definitions apply:

(1) "Artificial intelligence" or "AI" means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

(2) (A) "Automated decision system" or "ADS" means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.

(B) "Automated decision system" does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.

(3) "Department" means the Department of Technology.

(4) "High-risk automated decision system" or "high-risk ADS" means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, free speech, housing or accommodations, education, employment, credit, health care, child welfare, immigration, and criminal justice.

(b) The department shall develop and adopt regulations to create an AI risk management standard.


(1) To develop regulations related to the AI risk management standard, the department may apply principles and industry standards addressed in relevant publications, including, but not limited to, any of the following:

(A) The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, published by the White House Office of Science and Technology Policy in October 2022.

(B) The Artificial Intelligence Risk Management Framework (AI RMF 1.0), released by the National Institute of Standards and Technology (NIST) in January 2023.

(C) The Risk Management Framework for the Procurement of Artificial Intelligence (RMF PAIS 1.0), authored by the AI Procurement Lab and the Center for Inclusive Change in 2024.

(D) The Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Memorandum, published by the Executive Office of the President, Office of Management and Budget, dated March 28, 2024.

(2) The AI risk management standard shall include all of the following:

(A) A detailed risk assessment procedure for procuring ADS that analyzes all of the following:

(i) Organizational and supply chain governance associated with the ADS.

(ii) The purpose and use of the ADS.

(iii) Any known potential misuses or abuses of the ADS.

(iv) An assessment of the legality, traceability, and provenance of the data the ADS uses and the legality of the output of the ADS.

(v) The robustness, accuracy, and reliability of the ADS.

(vi) The interpretability and explainability of the ADS.

(B) Methods for appropriate risk controls between the state agency and ADS vendor, including, but not limited to, reducing the risk through various mitigation strategies, eliminating the risk, or sharing the risk.

(C) Adverse incident monitoring procedures.

(D) Identification and classification of prohibited use cases and applications of ADS that the state shall not procure.


(E) A detailed equity assessment that analyzes, at a minimum, all of the following:

(i) The individuals and communities that will interact with the high-risk ADS.

(ii) How the information or decisions generated by the ADS will impact an individual's rights, freedoms, economic status, health, health care, or well-being.

(iii) Any issues that may arise if the ADS is inaccurate.

(iv) How users of diverse abilities will interact with the user interface of the ADS and whether the ADS integrates and interacts with commonly used assistive technologies.

(F) An assessment that analyzes the level of human oversight associated with the use of ADS.

(G) Adherence to data minimization standards, including that an AI or ADS vendor shall only use information provided by or obtained from an agency to provide the specific service authorized by the agency. Further, the data collected may not be used for training of proprietary vendor or third-party systems.

(3) To develop the AI risk management standard, the department shall comply with all of the following:

(A) Collaborate with organizations that represent state and local government employees and industry experts, including, but not limited to, public trust and safety experts, community-based organizations, civil society groups, academic researchers, and research institutions focused on responsible AI procurement, design, and deployment.

(B) Consult with the California Privacy Protection Agency.

(C) Solicit public comment on the risk management standard.

(4) The department shall adopt regulations pursuant to this subdivision in accordance with the provisions of Chapter 3.5 (commencing with Section 11340) of Part 1 of Division 3 of Title 2 of the Government Code.

(c) Commencing six months after the date on which the regulations described in subdivision (b) are approved and final, a state agency shall not enter into a contract for an automated decision system, or any service that utilizes an automated decision system, unless the contract includes a clause that does all of the following:

(1) Provides a completed risk assessment of the relevant ADS that analyzes the items included in subparagraph (A) of paragraph (2) of subdivision (b).

(2) Requires the state agency or the ADS vendor, or both, to adhere to appropriate risk controls.

(3) Provides procedures for adverse incident monitoring.

(4) Requires authorization from the state agency before deployment of ADS upgrades and enhancements.

(5) Requires the state agency or the ADS vendor, or both, to provide notice to individuals who would likely be affected by the decisions or outcomes of the ADS, and information about how to appeal or opt out of ADS decisions or outcomes.

(6) Provides a termination right in the event of a significant breach of responsibility or violation by the vendor.