California 2025-2026 Regular Session

California Senate Bill SB813 Latest Draft

Bill / Amended Version Filed 03/26/2025

Amended in Senate March 26, 2025

CALIFORNIA LEGISLATURE 2025–2026 REGULAR SESSION

Senate Bill No. 813

Introduced by Senator McNerney
February 21, 2025

An act to ~~amend Section 2570.18.5 of the Business and Professions Code, relating to healing arts.~~ add Chapter 14 (commencing with Section 8898) to Division 1 of Title 2 of the Government Code, relating to artificial intelligence.

LEGISLATIVE COUNSEL'S DIGEST

SB 813, as amended, McNerney. ~~Occupational therapy.~~ Multistakeholder regulatory organizations.

Existing law requires, on or before September 1, 2024, the Department of Technology to conduct, in coordination with other interagency bodies as it deems appropriate, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, any state agency. The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence (GenAI) system to offer the user the option to include a manifest disclosure in image, video, or audio content, or content that is any combination thereof, created or altered by the covered provider's GenAI system that, among other things, identifies content as AI-generated content.

This bill would establish a process by which the Attorney General designates, for a renewable period of 3 years, a private entity as a multistakeholder regulatory organization (MRO) if that entity meets certain requirements, including that the entity presents a plan that ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications.
The bill would require an applicant for designation by the Attorney General as an MRO to submit with its application a plan that contains certain elements, including the applicant's approach to mitigating specific high-impact risks, including cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration.

This bill would require an MRO to perform various responsibilities related to certifying the safety of artificial intelligence models and artificial intelligence applications, including decertifying an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO and submitting an annual report to the Legislature and the Attorney General that addresses, among other things, the adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.

This bill would provide that in a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application, it is an affirmative defense to liability that the artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiff's injuries.

Existing law, the Occupational Therapy Practice Act, establishes the California Board of Occupational Therapy for the licensure and regulation of the practice of occupational therapy. Existing law prohibits a person from practicing occupational therapy or working as an occupational therapy assistant under the supervision of an occupational therapist without being licensed under the act.

Existing law requires an occupational therapist to document the occupational therapist's evaluation, goals, treatment plan, and summary of treatment in the client record.
Existing law further requires client records to be maintained for a period of not less than 7 years following the discharge of the client, except as specified. This bill would increase the above timeframe to 10 years following discharge of the client.

Digest Key

Vote: MAJORITY   Appropriation: NO   Fiscal Committee: YES   Local Program: NO

Bill Text

The people of the State of California do enact as follows:

SECTION 1. The Legislature finds and declares all of the following:

(a) A multistakeholder regulatory organization (MRO) tasked with defining standards based on best practices and certifying adherence to them is an agile, public-private model designed to promote innovation, ensure the security of artificial intelligence (AI) platforms, reduce regulatory uncertainty, and build societal trust.

(b) By proactively setting clear standards, creating tailored pathways for both established companies and emerging developers, and offering legal and economic incentives, the MRO transforms compliance into a competitive advantage. It is not just about managing risks; it is about accelerating responsible growth and empowering businesses to confidently innovate and thrive in an AI-driven economy. Compliance with established standards confers a strong market advantage.

(c) Leveraging private sector and government cooperation to achieve what would otherwise require regulations is a proven approach that utilizes all available expertise while enhancing transparency among industry players, policymakers, and the public.

(d) Rather than relying on government agencies, semiprivate standards organizations with sector-specific expertise can better accommodate diverse market participants, varied technology use cases, and aligned public-private interests.
This approach creates an adaptable and predictable compliance mechanism that ensures AI governance standards can evolve quickly alongside technological advancements.

(e) Legal safeguards are effective means to incentivize responsible AI development and prevent unnecessary harm. Reducing litigation risk encourages investment and fosters innovation. It also promotes heightened care and adherence to best practices while maintaining accountability and effectively balancing risk mitigation with consumer and public protection.

(f) Public opinion research shows that while the public wants government to help establish guardrails for AI, a majority believe the government alone is incapable of effectively establishing these guardrails. An MRO surpasses traditional regulation by incentivizing a race to the top for transparency and safety that prioritizes innovation and adaptability and serves as a central, informed voice to governments and society for responsive, forward-looking governance.

SEC. 2. Chapter 14 (commencing with Section 8898) is added to Division 1 of Title 2 of the Government Code, to read:

CHAPTER 14. Multistakeholder Regulatory Organizations

8898.
As used in this chapter:

(a) "Artificial intelligence application" means a software program or system that uses artificial intelligence models to perform tasks that typically require human intelligence.

(b) "Artificial intelligence model" means an engineered or machine-based system that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

(c) "Developer" means a person who develops an artificial intelligence model or artificial intelligence application that is deployed in the state.

(d) "Multistakeholder regulatory organization" (MRO) means an entity designated as an MRO by the Attorney General pursuant to this chapter that performs the functions specified in Section 8898.3, including certification of developers' exercise of heightened care and compliance with standards based on best practices for the prevention of personal injury and property damage with respect to an artificial intelligence model or application.

(e) "Plan" means a plan submitted pursuant to Section 8898.2.

(f) "Security vendor" means a third-party entity engaged by an MRO or developer to evaluate the safety and security of an artificial intelligence model or application through processes that include red teaming, risk detection, and risk mitigation.

8898.1.
(a) The Attorney General shall designate one or more MROs pursuant to this chapter.

(b) In complying with subdivision (a), the Attorney General shall determine whether an applicant MRO's plan ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications by considering all of the following:

(1) The applicant's personnel and the qualifications of those personnel.

(2) The quality of the applicant's plan with respect to ensuring that artificial intelligence model and application developers exercise heightened care and comply with best practice-based standards for the prevention of personal injury and property damage, considering factors including, but not limited to, both of the following:

(A) The viability and rigor of the applicant's evaluation methods, technologies, and administrative procedures.

(B) The adequacy of the applicant's plan to develop measurable standards for evaluating artificial intelligence developers' mitigation of risks.

(3) The applicant's independence from the artificial intelligence industry.

(4) Whether the applicant serves a particular existing or potential artificial intelligence industry segment.

(c) A designation as an MRO under this section shall expire after three years, and the MRO may apply for a new designation.

(d) The Attorney General may revoke a designation if any of the following are true:

(1) The MRO's plan is materially misleading or inaccurate.

(2) The MRO systematically fails to adhere to its plan.

(3) A material change compromises the MRO's independence from the artificial intelligence industry.

(4) Evolution of technology renders the MRO's methods obsolete for ensuring acceptable levels of risk of personal injury and property damage.

(5) An artificial intelligence model or artificial intelligence application certified by the MRO causes a significant harm.

8898.2.
(a) An applicant to the Attorney General for designation as an MRO shall submit with its application a plan that contains all of the following elements:

(1) The applicant's approach to auditing of artificial intelligence models and artificial intelligence applications to verify that an artificial intelligence developer has exercised heightened care and adhered to predeployment and postdeployment best practices and procedures to prevent personal injury or property damage caused by the artificial intelligence model or artificial intelligence application.

(2) The applicant's approach to mitigating specific high-impact risks, including cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration.

(3) An approach to ensuring disclosure by developers to the MRO of risks detected, incident reports, and risk mitigation efforts.

(4) An approach to specifying the scope and duration of certification of an artificial intelligence model or artificial intelligence application, including technical thresholds for updates requiring renewed certification.

(5) An approach to data collection for public reporting from audited developers and vendors that addresses all of the following:

(A) Aggregating and tracking evaluation data from certified labs.

(B) Categories of metadata to be aggregated and tracked.

(C) Measures to protect trade secrets and mitigate antitrust risk from information sharing.

(6) The applicant's intended use, if any, of security vendors to evaluate artificial intelligence developers, models, or applications, including a method of certifying and training vendors to accurately evaluate an artificial intelligence model or developer exercising heightened care and complying with best practices.

(7) Implementation and enforcement of whistleblower protections among certified developers.

(8) Remediation of postcertification noncompliance.

(9) An approach to reporting of societal risks and benefits
identified through auditing.

(10) An approach to interfacing effectively with federal and non-California state authorities.

(b) The plan submitted pursuant to this section may be tailored to a particular artificial intelligence market segment.

(c) An applicant shall annually audit all of the following to ensure independence from the artificial intelligence industry and report the findings of its audit to the Attorney General:

(1) The applicant's board composition.

(2) The availability of resources to implement the applicant's plan.

(3) The applicant's funding sources.

(4) Representation of civil society representatives in evaluation and reporting functions.

(d) The Attorney General shall not modify a plan submitted pursuant to this section.

8898.3. An MRO designated pursuant to this chapter shall do all of the following:

(a) Certify developers' and security vendors' exercise of heightened care and compliance with best practices for the prevention of personal injury and property damage.

(b) Implement the plan submitted pursuant to Section 8898.2.

(c) Decertify an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO.

(d) Submit to the Legislature, pursuant to Section 9795, and to the Attorney General an annual report that addresses all of the following:

(1) Aggregated information on capabilities of artificial intelligence models, the observed societal risks and benefits associated with those capabilities, and the potential societal risks and benefits associated with those capabilities.

(2) The adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.

(3) Developer and security vendor certifications.

(4) Aggregated results of certification assessments.

(5) Remedial measures prescribed by the MRO and whether the developer or security vendor complied with those measures.

(6) Identified additional risks outside personal injury or property damage and the adequacy of
existing mitigation measures to address those risks.

(e) Retain for 10 years a document that is related to the MRO's activities under this chapter.

8898.4. (a) In a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application, it shall be an affirmative defense to liability that the artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiff's injuries.

(b) The affirmative defense recognized in this section shall not apply to claims of intentional misconduct by the defendant.

SECTION 1. Section 2570.18.5 of the Business and Professions Code is amended to read:

2570.18.5. (a) An occupational therapist shall document the occupational therapist's evaluation, goals, treatment plan, and summary of treatment in the client record.

(b) An occupational therapy assistant shall document the services provided in the client record.

(c) Occupational therapists and occupational therapy assistants shall document and sign the client record legibly.

(d) Client records shall be maintained for a period of no less than 10 years following the discharge of the client, except that the records of unemancipated minors shall be maintained at least one year after the minor has reached the age of 18 years, and not in any case less than seven years.

of societal risks and benefits identified through auditing.(10) An approach to interfacing effectively with federal and non-California state authorities.(b) The plan submitted pursuant to this section may be tailored to a particular artificial intelligence market segment.(c) An applicant shall annually audit all of the following to ensure independence from the artificial intelligence industry and report the findings of its audit to the Attorney General:(1) The applicants board composition.(2) The availability of resources to implement the applicants plan.(3) The applicants funding sources.(4) Representation of civil society representatives in evaluation and reporting functions.(d) The Attorney General shall not modify a plan submitted pursuant to this section.8898.3. An MRO designated pursuant to this chapter shall do all of the following:(a) Certify developers and security vendors exercise of heightened care and compliance with best practices for the prevention of personal injury and property damage.(b) Implement the plan submitted pursuant to Section 8898.2.(c) Decertify an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO.(d) Submit to the Legislature, pursuant to Section 9795, and to the Attorney General an annual report that addresses all of the following:(1) Aggregated information on capabilities of artificial intelligence models, the observed societal risks and benefits associated with those capabilities, and the potential societal risks and benefits associated with those capabilities.(2) The adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.(3) Developer and security vendor certifications.(4) Aggregated results of certification assessments.(5) Remedial measures prescribed by the MRO and whether the developer or security vendor complied with those measures.(6) Identified additional risks outside personal injury or property 
damage and the adequacy of existing mitigation measures to address those risks.(e) Retain for 10 years a document that is related to the MROs activities under this chapter.8898.4. (a) In a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application, it shall be an affirmative defense to liability that the artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiffs injuries. (b) The affirmative defense recognized in this section shall not apply to claims of intentional misconduct by the defendant.SECTION 1.Section 2570.18.5 of the Business and Professions Code is amended to read:2570.18.5.(a)An occupational therapist shall document the occupational therapists evaluation, goals, treatment plan, and summary of treatment in the client record.(b)An occupational therapy assistant shall document the services provided in the client record.(c)Occupational therapists and occupational therapy assistants shall document and sign the client record legibly.(d)Client records shall be maintained for a period of no less than 10 years following the discharge of the client, except that the records of unemancipated minors shall be maintained at least one year after the minor has reached the age of 18 years, and not in any case less than seven years.

The people of the State of California do enact as follows:

SECTION 1. The Legislature finds and declares all of the following:

(a) A multistakeholder regulatory organization (MRO) tasked with defining standards based on best practices and certifying adherence to them is an agile, public-private model designed to promote innovation, ensure the security of artificial intelligence (AI) platforms, reduce regulatory uncertainty, and build societal trust.

(b) By proactively setting clear standards, creating tailored pathways for both established companies and emerging developers, and offering legal and economic incentives, the MRO transforms compliance into a competitive advantage. It is not just about managing risks; it is about accelerating responsible growth and empowering businesses to confidently innovate and thrive in an AI-driven economy. Compliance with established standards confers a strong market advantage.

(c) Leveraging private sector and government cooperation to achieve what would otherwise require regulations is a proven approach that utilizes all available expertise while enhancing transparency among industry players, policymakers, and the public.

(d) Rather than relying on government agencies, semiprivate standards organizations with sector-specific expertise can better accommodate diverse market participants, varied technology use cases, and aligned public-private interests. This approach creates an adaptable and predictable compliance mechanism that ensures AI governance standards can evolve quickly alongside technological advancements.

(e) Legal safeguards are effective means to incentivize responsible AI development and prevent unnecessary harm. Reducing litigation risk encourages investment and fosters innovation. It also promotes heightened care and adherence to best practices while maintaining accountability and effectively balancing risk mitigation with consumer and public protection.

(f) Public opinion research shows that while the public wants government to help establish guardrails for AI, a majority believe the government alone is incapable of effectively establishing these guardrails. An MRO surpasses traditional regulation by incentivizing a race to the top for transparency and safety that prioritizes innovation and adaptability and serves as a central, informed voice to governments and society for responsive, forward-looking governance.

SEC. 2. Chapter 14 (commencing with Section 8898) is added to Division 1 of Title 2 of the Government Code, to read:

CHAPTER 14. Multistakeholder Regulatory Organizations

8898. As used in this chapter:

(a) "Artificial intelligence application" means a software program or system that uses artificial intelligence models to perform tasks that typically require human intelligence.

(b) "Artificial intelligence model" means an engineered or machine-based system that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

(c) "Developer" means a person who develops an artificial intelligence model or artificial intelligence application that is deployed in the state.

(d) "Multistakeholder regulatory organization" (MRO) means an entity designated as an MRO by the Attorney General pursuant to this chapter that performs the functions specified in Section 8898.3, including certification of developers' exercise of heightened care and compliance with standards based on best practices for the prevention of personal injury and property damage with respect to an artificial intelligence model or application.

(e) "Plan" means a plan submitted pursuant to Section 8898.2.

(f) "Security vendor" means a third-party entity engaged by an MRO or developer to evaluate the safety and security of an artificial intelligence model or application through processes that include red teaming, risk detection, and risk mitigation.

8898.1. (a) The Attorney General shall designate one or more MROs pursuant to this chapter.

(b) In complying with subdivision (a), the Attorney General shall determine whether an applicant MRO's plan ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications by considering all of the following:

(1) The applicant's personnel and the qualifications of those personnel.

(2) The quality of the applicant's plan with respect to ensuring that artificial intelligence model and application developers exercise heightened care and comply with best practice-based standards for the prevention of personal injury and property damage, considering factors including, but not limited to, both of the following:

(A) The viability and rigor of the applicant's evaluation methods, technologies, and administrative procedures.

(B) The adequacy of the applicant's plan to develop measurable standards for evaluating artificial intelligence developers' mitigation of risks.

(3) The applicant's independence from the artificial intelligence industry.

(4) Whether the applicant serves a particular existing or potential artificial intelligence industry segment.

(c) A designation as an MRO under this section shall expire after three years, and the MRO may apply for a new designation.

(d) The Attorney General may revoke a designation if any of the following are true:

(1) The MRO's plan is materially misleading or inaccurate.

(2) The MRO systematically fails to adhere to its plan.

(3) A material change compromises the MRO's independence from the artificial intelligence industry.

(4) Evolution of technology renders the MRO's methods obsolete for ensuring acceptable levels of risk of personal injury and property damage.

(5) An artificial intelligence model or artificial intelligence application certified by the MRO causes a significant harm.

8898.2. (a) An applicant to the Attorney General for designation as an MRO shall submit with its application a plan that contains all of the following elements:

(1) The applicant's approach to auditing of artificial intelligence models and artificial intelligence applications to verify that an artificial intelligence developer has exercised heightened care and adhered to predeployment and postdeployment best practices and procedures to prevent personal injury or property damage caused by the artificial intelligence model or artificial intelligence application.

(2) The applicant's approach to mitigating specific high-impact risks, including cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration.

(3) An approach to ensuring disclosure by developers to the MRO of risks detected, incident reports, and risk mitigation efforts.

(4) An approach to specifying the scope and duration of certification of an artificial intelligence model or artificial intelligence application, including technical thresholds for updates requiring renewed certification.

(5) An approach to data collection for public reporting from audited developers and vendors that addresses all of the following:

(A) Aggregating and tracking evaluation data from certified labs.

(B) Categories of metadata to be aggregated and tracked.

(C) Measures to protect trade secrets and mitigate antitrust risk from information sharing.

(6) The applicant's intended use, if any, of security vendors to evaluate artificial intelligence developers, models, or applications, including a method of certifying and training vendors to accurately evaluate an artificial intelligence model or developer exercising heightened care and complying with best practices.

(7) Implementation and enforcement of whistleblower protections among certified developers.

(8) Remediation of postcertification noncompliance.

(9) An approach to reporting of societal risks and benefits identified through auditing.

(10) An approach to interfacing effectively with federal and non-California state authorities.

(b) The plan submitted pursuant to this section may be tailored to a particular artificial intelligence market segment.

(c) An applicant shall annually audit all of the following to ensure independence from the artificial intelligence industry and report the findings of its audit to the Attorney General:

(1) The applicant's board composition.

(2) The availability of resources to implement the applicant's plan.

(3) The applicant's funding sources.

(4) Representation of civil society representatives in evaluation and reporting functions.

(d) The Attorney General shall not modify a plan submitted pursuant to this section.

8898.3. An MRO designated pursuant to this chapter shall do all of the following:

(a) Certify developers' and security vendors' exercise of heightened care and compliance with best practices for the prevention of personal injury and property damage.

(b) Implement the plan submitted pursuant to Section 8898.2.

(c) Decertify an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO.

(d) Submit to the Legislature, pursuant to Section 9795, and to the Attorney General an annual report that addresses all of the following:

(1) Aggregated information on capabilities of artificial intelligence models, the observed societal risks and benefits associated with those capabilities, and the potential societal risks and benefits associated with those capabilities.

(2) The adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.

(3) Developer and security vendor certifications.

(4) Aggregated results of certification assessments.

(5) Remedial measures prescribed by the MRO and whether the developer or security vendor complied with those measures.

(6) Identified additional risks outside personal injury or property damage and the adequacy of existing mitigation measures to address those risks.

(e) Retain for 10 years a document that is related to the MRO's activities under this chapter.

8898.4. (a) In a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application, it shall be an affirmative defense to liability that the artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiff's injuries.

(b) The affirmative defense recognized in this section shall not apply to claims of intentional misconduct by the defendant.


 CHAPTER 14. Multistakeholder Regulatory Organizations





8898. As used in this chapter:

(a) "Artificial intelligence application" means a software program or system that uses artificial intelligence models to perform tasks that typically require human intelligence.

(b) "Artificial intelligence model" means an engineered or machine-based system that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

(c) "Developer" means a person who develops an artificial intelligence model or artificial intelligence application that is deployed in the state.

(d) "Multistakeholder regulatory organization" (MRO) means an entity designated as an MRO by the Attorney General pursuant to this chapter that performs the functions specified in Section 8898.3, including certification of developers' exercise of heightened care and compliance with standards based on best practices for the prevention of personal injury and property damage with respect to an artificial intelligence model or application.

(e) "Plan" means a plan submitted pursuant to Section 8898.2.

(f) "Security vendor" means a third-party entity engaged by an MRO or developer to evaluate the safety and security of an artificial intelligence model or application through processes that include red teaming, risk detection, and risk mitigation.




8898.1. (a) The Attorney General shall designate one or more MROs pursuant to this chapter.

(b) In complying with subdivision (a), the Attorney General shall determine whether an applicant MRO's plan ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications by considering all of the following:

(1) The applicant's personnel and the qualifications of those personnel.

(2) The quality of the applicant's plan with respect to ensuring that artificial intelligence model and application developers exercise heightened care and comply with best practice-based standards for the prevention of personal injury and property damage, considering factors including, but not limited to, both of the following:

(A) The viability and rigor of the applicant's evaluation methods, technologies, and administrative procedures.

(B) The adequacy of the applicant's plan to develop measurable standards for evaluating artificial intelligence developers' mitigation of risks.

(3) The applicant's independence from the artificial intelligence industry.

(4) Whether the applicant serves a particular existing or potential artificial intelligence industry segment.

(c) A designation as an MRO under this section shall expire after three years, and the MRO may apply for a new designation.

(d) The Attorney General may revoke a designation if any of the following are true:

(1) The MRO's plan is materially misleading or inaccurate.

(2) The MRO systematically fails to adhere to its plan.

(3) A material change compromises the MRO's independence from the artificial intelligence industry.

(4) Evolution of technology renders the MRO's methods obsolete for ensuring acceptable levels of risk of personal injury and property damage.

(5) An artificial intelligence model or artificial intelligence application certified by the MRO causes a significant harm.




8898.2. (a) An applicant to the Attorney General for designation as an MRO shall submit with its application a plan that contains all of the following elements:

(1) The applicant's approach to auditing of artificial intelligence models and artificial intelligence applications to verify that an artificial intelligence developer has exercised heightened care and adhered to predeployment and postdeployment best practices and procedures to prevent personal injury or property damage caused by the artificial intelligence model or artificial intelligence application.

(2) The applicant's approach to mitigating specific high-impact risks, including cybersecurity, chemical, biological, radiological, and nuclear threats, malign persuasion, and artificial intelligence model autonomy and exfiltration.

(3) An approach to ensuring disclosure by developers to the MRO of risks detected, incident reports, and risk mitigation efforts.

(4) An approach to specifying the scope and duration of certification of an artificial intelligence model or artificial intelligence application, including technical thresholds for updates requiring renewed certification.

(5) An approach to data collection for public reporting from audited developers and vendors that addresses all of the following:

(A) Aggregating and tracking evaluation data from certified labs.

(B) Categories of metadata to be aggregated and tracked.

(C) Measures to protect trade secrets and mitigate antitrust risk from information sharing.

(6) The applicant's intended use, if any, of security vendors to evaluate artificial intelligence developers, models, or applications, including a method of certifying and training vendors to accurately evaluate an artificial intelligence model or developer exercising heightened care and complying with best practices.

(7) Implementation and enforcement of whistleblower protections among certified developers.

(8) Remediation of postcertification noncompliance.

(9) An approach to reporting of societal risks and benefits identified through auditing.

(10) An approach to interfacing effectively with federal and non-California state authorities.

(b) The plan submitted pursuant to this section may be tailored to a particular artificial intelligence market segment.

(c) An applicant shall annually audit all of the following to ensure independence from the artificial intelligence industry and report the findings of its audit to the Attorney General:

(1) The applicant's board composition.

(2) The availability of resources to implement the applicant's plan.

(3) The applicant's funding sources.

(4) Representation of civil society representatives in evaluation and reporting functions.

(d) The Attorney General shall not modify a plan submitted pursuant to this section.




8898.3. An MRO designated pursuant to this chapter shall do all of the following:

(a) Certify developers' and security vendors' exercise of heightened care and compliance with best practices for the prevention of personal injury and property damage.

(b) Implement the plan submitted pursuant to Section 8898.2.

(c) Decertify an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO.

(d) Submit to the Legislature, pursuant to Section 9795, and to the Attorney General an annual report that addresses all of the following:

(1) Aggregated information on capabilities of artificial intelligence models, the observed societal risks and benefits associated with those capabilities, and the potential societal risks and benefits associated with those capabilities.

(2) The adequacy of existing evaluation resources and mitigation measures to mitigate observed and potential risks.

(3) Developer and security vendor certifications.

(4) Aggregated results of certification assessments.

(5) Remedial measures prescribed by the MRO and whether the developer or security vendor complied with those measures.

(6) Identified additional risks outside personal injury or property damage and the adequacy of existing mitigation measures to address those risks.

(e) Retain for 10 years a document that is related to the MRO's activities under this chapter.




8898.4. (a) In a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application, it shall be an affirmative defense to liability that the artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiff's injuries.

 (b) The affirmative defense recognized in this section shall not apply to claims of intentional misconduct by the defendant.





(a) An occupational therapist shall document the occupational therapist's evaluation, goals, treatment plan, and summary of treatment in the client record.



(b) An occupational therapy assistant shall document the services provided in the client record.



(c) Occupational therapists and occupational therapy assistants shall document and sign the client record legibly.



(d) Client records shall be maintained for a period of no less than 10 years following the discharge of the client, except that the records of unemancipated minors shall be maintained at least one year after the minor has reached the age of 18 years, and not in any case less than seven years.