Connecticut 2024 Regular Session

Connecticut Senate Bill SB00002 Comm Sub / Bill

Filed 02/21/2024

                     
 
 
General Assembly  Committee Bill No. 2  
February Session, 2024  
LCO No. 1489 
 
 
Referred to Committee on GENERAL LAW  
 
 
Introduced by:  
(GL)  
 
 
 
 
 
AN ACT CONCERNING ARTIFICIAL INTELLIGENCE. 
Be it enacted by the Senate and House of Representatives in General Assembly convened:
 
Section 1. (NEW) (Effective October 1, 2024) For the purposes of this section and sections 2 to 7, inclusive, of this act, unless the context otherwise requires:

(1) "Algorithmic discrimination" means any condition in which an automated decision tool materially increases the risk of any unjustified differential treatment or impact that disfavors any individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status or other classification protected under the laws of this state;

(2) "Artificial intelligence" means any technology, including, but not limited to, machine learning, that uses data to train an algorithm or predictive model for the purpose of enabling a computer system or service to autonomously perform any task, including, but not limited to, visual perception, language processing or speech recognition, that is normally associated with human intelligence or perception;

(3) "Artificial intelligence system" means any machine-based system that, for any explicit or implicit objective, infers from the inputs such system receives how to generate outputs, including, but not limited to, content, decisions, predictions or recommendations, that can influence physical or virtual environments;

(4) "Automated decision tool" means any service or system that (A) uses artificial intelligence, and (B) has been specifically developed and marketed, or specifically modified, to make, or be a controlling factor in making, any consequential decision;

(5) "Consequential decision" means any decision that has a material legal or similarly significant effect on any consumer's access to, or the availability, cost or terms of, any criminal justice, education enrollment or opportunity, employment or employment opportunity, essential good or service, financial or lending service, government service, health care service, housing, insurance or legal service;

(6) "Consumer" means any individual who is a resident of this state;

(7) "Deploy" means to use a generative artificial intelligence system or high-risk artificial intelligence system;

(8) "Deployer" means any person doing business in this state that deploys (A) a generative artificial intelligence system, or (B) a high-risk artificial intelligence system;

(9) "Developer" means any person doing business in this state that develops, or intentionally and substantially modifies, (A) a generative artificial intelligence system, or (B) a high-risk artificial intelligence system;

(10) "General purpose artificial intelligence model" (A) means any form of artificial intelligence system that (i) displays significant generality, (ii) is capable of competently performing a wide range of distinct tasks, and (iii) can be integrated into a variety of downstream applications or systems, and (B) does not include any artificial intelligence model that is used for development, prototyping and research activities before such model is released on the market;

(11) "Generative artificial intelligence system" means any artificial intelligence system, including, but not limited to, a general purpose artificial intelligence model, that is able to produce synthetic digital content;

(12) "High-risk artificial intelligence system" means any artificial intelligence system that, when deployed, makes, or is a controlling factor in making, a consequential decision;

(13) "Intentional and substantial modification" means any deliberate change made to (A) a generative artificial intelligence system, other than a change made to a generative artificial intelligence system as a result of learning after the generative artificial intelligence system has been deployed, that (i) affects compliance of the generative artificial intelligence system, or (ii) changes the purpose of the generative artificial intelligence system, or (B) a high-risk artificial intelligence system that creates, or potentially creates, any new risk of algorithmic discrimination;

(14) "Machine learning" means any technique that enables a computer system or service to autonomously learn and adapt by using algorithms and statistical models to autonomously analyze and draw inferences from patterns in data;

(15) "Person" means any individual, association, corporation, limited liability company, partnership, trust or other legal entity;

(16) "Synthetic digital content" means any digital content, including, but not limited to, any audio, image, text or video, that is produced by a generative artificial intelligence system; and

(17) "Trade secret" has the same meaning as provided in section 35-51 of the general statutes.
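
An editorial illustration, not part of the bill text: subdivision (14) above defines "machine learning" as any technique by which algorithms and statistical models autonomously draw inferences from patterns in data. A minimal sketch of such a technique, using a hypothetical toy dataset and an off-the-shelf statistical model, might look like this:

```python
# Illustrative sketch only -- a "machine learning" technique in the sense of
# Sec. 1(14): a statistical model that autonomously draws inferences from
# patterns in data. The dataset and model choice here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Toy pattern: larger inputs tend to belong to class 1.
X_train = [[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)    # the algorithm infers the pattern from the data

print(model.predict([[7.5]]))  # the trained model generalizes to a new input
```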

Sec. 2. (NEW) (Effective October 1, 2024) (a) Beginning on July 1, 2025, each developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the Attorney General pursuant to section 7 of this act, there shall be a rebuttable presumption that a developer used reasonable care as required under this subsection if the developer complied with the provisions of this section.

(b) Beginning on July 1, 2025, and except as provided in subsection (f) of this section, no developer of a high-risk artificial intelligence system shall offer, sell, lease, license, give or otherwise provide to a deployer a high-risk artificial intelligence system unless the developer makes available to the deployer:

(1) A general statement describing the intended uses of such high-risk artificial intelligence system; and

(2) Documentation (A) disclosing (i) all known or reasonably foreseeable limitations of such high-risk artificial intelligence system, including, but not limited to, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system, (ii) the purpose of such high-risk artificial intelligence system and the intended benefits, uses and deployment contexts of such high-risk artificial intelligence system, and (iii) a summary of the type of data, if any, intended to be collected from consumers and processed by such high-risk artificial intelligence system when such high-risk artificial intelligence system is deployed, and (B) describing (i) the type of data used to program or train such high-risk artificial intelligence system, (ii) how such high-risk artificial intelligence system was evaluated for performance and relevant information related to explainability before such high-risk artificial intelligence system was offered, sold, leased, licensed, given or otherwise provided to a deployer, (iii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases and appropriate mitigation, (iv) the intended outputs of such high-risk artificial intelligence system, (v) the measures the developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence system, and (vi) how a consumer can use or monitor such high-risk artificial intelligence system when such high-risk artificial intelligence system is deployed.

(c) Except as provided in subsection (f) of this section, each developer that offers, sells, leases, licenses, gives or otherwise makes available to a deployer a high-risk artificial intelligence system on or after July 1, 2025, shall provide to the deployer, as technically feasible, through artifacts such as model cards, dataset cards or impact assessments, all material information and documentation in the developer's possession, custody or control that the deployer, or a third party contracted by the deployer, may reasonably require to complete an impact assessment pursuant to subsection (c) of section 3 of this act.
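
An editorial illustration, not part of the bill text: the "model cards" and "dataset cards" referenced in subsection (c) are, in common industry practice, structured summaries of a model's purpose, training data, evaluation and limitations. One hypothetical shape for such an artifact, with field names invented here to track the disclosures listed in subsection (b), is:

```python
# Illustrative sketch only -- a "model card"-style artifact whose fields
# loosely track the developer disclosures listed in Sec. 2(b). The field
# names and example values are hypothetical, not prescribed by the bill.
model_card = {
    "intended_uses": "Screen rental applications for completeness",          # (b)(1)
    "known_limitations": [                                                   # (b)(2)(A)(i)
        "Lower accuracy for applicants with thin credit files",
    ],
    "purpose_and_deployment_contexts": "Residential leasing decisions",      # (b)(2)(A)(ii)
    "consumer_data_collected": ["application form fields"],                  # (b)(2)(A)(iii)
    "training_data": "De-identified application records, 2018-2023",         # (b)(2)(B)(i)
    "evaluation": {"metric": "AUC", "value": 0.87},                          # (b)(2)(B)(ii)
    "data_governance": "Source vetting, bias audit, documented mitigation",  # (b)(2)(B)(iii)
    "intended_outputs": "Risk score in [0, 1] with reason codes",            # (b)(2)(B)(iv)
    "discrimination_mitigations": "Quarterly disparate-impact review",       # (b)(2)(B)(v)
    "consumer_monitoring": "Public status page and appeal contact",          # (b)(2)(B)(vi)
}
```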

(d) (1) Beginning on July 1, 2025, each developer shall make available, in a manner that is clear and readily available for public inspection on such developer's Internet web site or in a public use case inventory, a statement summarizing:

(A) The types of high-risk artificial intelligence systems that such developer (i) has developed or intentionally and substantially modified, and (ii) currently makes available to deployers; and

(B) How such developer manages known or reasonably foreseeable risks of algorithmic discrimination arising from development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subparagraph (A) of this subdivision.

(2) Each developer shall update the statement described in subdivision (1) of this subsection (i) as necessary to ensure that such statement remains accurate, and (ii) not later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subparagraph (A) of subdivision (1) of this subsection.

(e) Beginning on July 1, 2025, if the developer of a high-risk artificial intelligence system is informed by a deployer, or discovers through such developer's ongoing testing and analysis, that such developer's high-risk artificial intelligence system has been deployed and caused, or is reasonably likely to have caused, algorithmic discrimination, the developer shall, not later than ninety days after the date of such discovery, disclose to the Attorney General and all known deployers of such high-risk artificial intelligence system any known or reasonably foreseeable risk of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system.

(f) Nothing in subsections (b) to (e), inclusive, of this section shall be construed to require a developer to disclose any trade secret or other confidential or proprietary information.

(g) Beginning on July 1, 2025, the Attorney General may require that a developer disclose to the Attorney General, in a form and manner prescribed by the Attorney General, any statement or documentation described in subsection (b) of this section if such statement or documentation is relevant to an investigation conducted by the Attorney General. The Attorney General may evaluate such statement or documentation to ensure compliance with the provisions of this section, and such statement or documentation shall be exempt from disclosure under the Freedom of Information Act, as defined in section 1-200 of the general statutes. To the extent any information contained in any such statement or documentation includes any information subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.

Sec. 3. (NEW) (Effective October 1, 2024) (a) Beginning on July 1, 2025, each deployer shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the Attorney General pursuant to section 7 of this act, there shall be a rebuttable presumption that a deployer used reasonable care as required under this subsection if the deployer complied with the provisions of subsections (b) to (e), inclusive, of this section.

(b) (1) Beginning on July 1, 2025, no deployer shall deploy a high-risk artificial intelligence system unless the deployer has implemented a risk management policy and program. The risk management policy and program shall specify and incorporate the principles, processes and personnel that the deployer shall use to identify, document and eliminate any known or reasonably foreseeable risks of algorithmic discrimination. Each risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering:

(A) (i) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems; or

(ii) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate;

(B) The size and complexity of the deployer;

(C) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence systems; and

(D) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer.

(2) A risk management policy and program implemented pursuant to subdivision (1) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.

(c) (1) Except as provided in subdivisions (3) and (4) of this subsection:

(A) A deployer that deploys a high-risk artificial intelligence system on or after July 1, 2025, or a third party contracted by the deployer, shall complete an impact assessment for the high-risk artificial intelligence system; and

(B) Beginning on July 1, 2025, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system not later than ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available.

(2) (A) Each impact assessment completed pursuant to this subsection shall include, at a minimum:

(i) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;

(ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to eliminate such risks;

(iii) A description of (I) the categories of data the high-risk artificial intelligence system processes as inputs, and (II) the outputs such high-risk artificial intelligence system produces;

(iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to retrain such high-risk artificial intelligence system;

(v) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;

(vi) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and

(vii) A description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, including, but not limited to, the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system as such issues arise.

(B) In addition to the statement, analysis, descriptions, overview and metrics required under subparagraph (A) of this subdivision, each impact assessment completed pursuant to this subsection following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2025, shall include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system.

(3) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.

(4) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection.

(5) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
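
An editorial illustration, not part of the bill text: the minimum impact-assessment contents enumerated in subdivision (2)(A) of this subsection map naturally onto a structured record, which may help in reading the list. A hypothetical sketch:

```python
# Illustrative sketch only -- one possible record layout for the minimum
# impact-assessment contents in Sec. 3(c)(2)(A). Field names are invented
# here to track clauses (i) through (vii); the bill prescribes no format.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    purpose_and_benefits: str               # (i) purpose, use cases, context, benefits
    discrimination_analysis: str            # (ii) foreseeable risks and mitigation steps
    input_data_categories: list[str]        # (iii)(I) categories of input data
    output_descriptions: list[str]          # (iii)(II) outputs produced
    retraining_data_categories: list[str]   # (iv) data used to customize, if any
    performance_metrics: dict[str, float]   # (v) evaluation metrics and known limitations
    transparency_measures: str              # (vi) consumer-facing disclosure measures
    post_deployment_monitoring: str         # (vii) oversight process and user safeguards
    completed_on: str = ""                  # retained at least three years per (c)(5)
```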

(d) Beginning on July 1, 2025, a deployer, or a third party contracted by the deployer, shall review, at least annually, the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that such high-risk artificial intelligence system is not causing algorithmic discrimination.

(e) (1) Beginning on July 1, 2025, and not later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a controlling factor in making, a consequential decision concerning a consumer, the deployer shall:

(A) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a controlling factor in making, such consequential decision; and

(B) Provide to the consumer (i) a statement disclosing (I) the purpose of such high-risk artificial intelligence system, and (II) the nature of such consequential decision, (ii) contact information for such deployer, and (iii) a description, in plain language, of such high-risk artificial intelligence system, which description shall, at a minimum, include a description of (I) any human components of such high-risk artificial intelligence system, and (II) how any automated components of such high-risk artificial intelligence system are used to inform such consequential decision.

(2) A deployer may provide to a consumer the notice, statement, contact information and description required under subdivision (1) of this subsection in any manner that is clear and readily available.

(f) (1) Beginning on July 1, 2025, each deployer shall make available, in a manner that is clear and readily available for public inspection, a statement summarizing:

(A) The types of high-risk artificial intelligence systems that are currently deployed by such deployer; and

(B) How such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence system described in subparagraph (A) of this subdivision.

(2) Each deployer shall periodically update the statement described in subdivision (1) of this subsection.

(g) If a deployer deploys a high-risk artificial intelligence system on or after July 1, 2025, and subsequently discovers that the high-risk artificial intelligence system has caused, or is reasonably likely to have caused, algorithmic discrimination against consumers, the deployer shall, not later than ninety days after the date of such discovery, send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing such discovery.

(h) Nothing in subsections (b) to (g), inclusive, of this section shall be construed to require a deployer to disclose any trade secret.

(i) Beginning on July 1, 2025, the Attorney General may require that a deployer, or the third party contracted by the deployer as set forth in subsection (c) of this section, as applicable, disclose to the Attorney General, in a form and manner prescribed by the Attorney General, any risk management policy implemented pursuant to subsection (b) of this section, impact assessment completed pursuant to subsection (c) of this section or record maintained pursuant to subdivision (5) of subsection (c) of this section if such risk management policy, impact assessment or record is relevant to an investigation conducted by the Attorney General. The Attorney General may evaluate such risk management policy, impact assessment or record to ensure compliance with the provisions of this section, and such risk management policy, impact assessment or record shall be exempt from disclosure under the Freedom of Information Act, as defined in section 1-200 of the general statutes. To the extent any information contained in any such risk management policy, impact assessment or record includes any information subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.

Sec. 4. (NEW) (Effective October 1, 2024) (a) (1) Beginning on January 1, 2026, each developer shall use reasonable care to protect consumers from any risk arising from any development or intentional and substantial modification of a generative artificial intelligence system, to the extent such risk is known or reasonably foreseeable:

(A) Of any unfair or deceptive trade practice under subsection (a) of section 42-110b of the general statutes;

(B) Of any unlawful disparate impact on consumers;

(C) Of any emotional, financial, mental, physical or reputational injury to consumers that may be redressed under the laws of this state;

(D) Of any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if such intrusion (i) would be offensive to a reasonable person, and (ii) may be redressed under the laws of this state; or

(E) To the intellectual property rights of persons under applicable state and federal intellectual property laws.

(2) In any enforcement action brought by the Attorney General pursuant to section 7 of this act on or after January 1, 2026, there shall be a rebuttable presumption that a developer used reasonable care as required under subdivision (1) of this subsection if the developer complied with the provisions of this section.

(b) (1) Except as provided in subdivision (2) of this subsection, a developer that develops, or intentionally and substantially modifies, a general purpose artificial intelligence model on or after January 1, 2026, shall:

(A) Reduce and mitigate the known or reasonably foreseeable risks described in subdivision (1) of subsection (a) of this section through, for example, the involvement of qualified experts and documentation of any known or reasonably foreseeable, but nonmitigable, risks;

(B) Incorporate and process datasets that are subject to data governance measures, including, but not limited to, measures to (i) examine the suitability of data sources for possible biases and appropriate mitigation, and (ii) prevent such general purpose artificial intelligence model from recklessly training on child pornography, as defined in section 53a-193 of the general statutes;

(C) Achieve, throughout the lifecycle of such general purpose artificial intelligence model, appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity, as assessed through appropriate methods, including, but not limited to, model evaluation involving independent experts, documented analysis and extensive testing, during conceptualization, design and development of such general purpose artificial intelligence model; and

(D) Incorporate science-backed standards and techniques that (i) authenticate, detect, label and track the provenance of audio files, images or videos that are synthetic digital content, where appropriate, and in a manner that is (I) technically feasible, and (II) informed by the specificities and limitations of different content types, and (ii) ensure that such general purpose artificial intelligence model includes safeguards that are (I) adequate to prevent generation of content in violation of applicable law, including, but not limited to, child pornography, as defined in section 53a-193 of the general statutes, and (II) in line with the generally acknowledged state of the art; and

(2) (A) The provisions of subdivision (1) of this subsection shall not apply to a developer that develops, or intentionally and substantially modifies, a general purpose artificial intelligence model on or after January 1, 2026, if:

(i) The developer releases such general purpose artificial intelligence model under a free and open-source license; and

(ii) Unless such general purpose artificial intelligence model is deployed as a high-risk artificial intelligence system, the parameters of such general purpose artificial intelligence model, including, but not limited to, the weights and information concerning the model architecture and model usage for such general purpose artificial intelligence model, are made publicly available.

(B) A developer that takes any action under the exemption established in subparagraph (A) of this subdivision shall bear the burden of demonstrating that such action qualifies for such exemption.

(3) A developer that develops, or intentionally and substantially modifies, a general purpose artificial intelligence model described in subdivision (1) of this subsection shall maintain all records maintained for the purposes set forth in this subsection for a period of at least three years following the final deployment of such general purpose artificial intelligence model.
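
An editorial illustration, not part of the bill text: subparagraph (D) of subdivision (1) of this subsection contemplates standards that authenticate, label and track the provenance of synthetic audio, images and video; in practice this is the territory of industry standards such as C2PA. The hypothetical sketch below shows only the core idea of binding a machine-readable "synthetic" declaration to a file's contents:

```python
# Illustrative sketch only -- the basic idea behind the provenance labeling
# that Sec. 4(b)(1)(D) contemplates for synthetic digital content. Real
# deployments would use an industry standard such as C2PA; the manifest
# fields below are hypothetical.
import hashlib
import json

def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Build a provenance manifest declaring the content AI-generated."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # binds manifest to the exact bytes
        "generator": generator,                         # which system produced the content
        "synthetic": True,                              # explicit machine-readable label
    }

manifest = label_synthetic_content(b"<audio bytes>", "example-gpai-model")
print(json.dumps(manifest, indent=2))
```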

(c) (1) Except as provided in subdivisions (3) and (4) of this subsection, a developer that develops, or intentionally and substantially modifies, a generative artificial intelligence system on or after January 1, 2026, shall complete an impact assessment for such generative artificial intelligence system pursuant to this subsection.

(2) Each impact assessment completed pursuant to this subsection shall include, at a minimum, an evaluation of:

(A) The intended purpose and potential benefits of such generative artificial intelligence system;

(B) Any reasonably foreseeable risk that such generative artificial intelligence system could (i) harm the health or safety of individuals, or (ii) result in unlawful discrimination against individuals;

(C) Whether use of such generative artificial intelligence system could harm the health and safety of individuals or adversely impact the fundamental rights of individuals; and

(D) The extent to which individuals who may be harmed or adversely impacted are dependent on the outcomes produced by such generative artificial intelligence system.

(3) A single impact assessment may address a comparable set of generative artificial intelligence systems developed, or intentionally and substantially modified, by a developer.

(4) If a developer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection.

(5) A developer that completes an impact assessment pursuant to this subsection shall maintain such impact assessment, and all records concerning such impact assessment, for a period of at least three years following the final deployment of such generative artificial intelligence system.

(d) Beginning on January 1, 2026, the Attorney General may require that a developer disclose to the Attorney General, in a form and manner prescribed by the Attorney General, any record maintained pursuant to subdivision (3) of subsection (b) of this section, impact assessment completed pursuant to subsection (c) of this section or record maintained pursuant to subdivision (5) of subsection (c) of this section if such impact assessment or record is relevant to an investigation conducted by the Attorney General. The Attorney General may evaluate such impact assessment or record to ensure compliance with the provisions of this section, and such impact assessment or record shall be exempt from disclosure under the Freedom of Information Act, as defined in section 1-200 of the general statutes. To the extent any information contained in any such impact assessment or record includes any information subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.

Sec. 5. (NEW) (Effective October 1, 2024) (a) Except as provided in subsection (b) of this section, each person doing business in this state, including, but not limited to, each developer or deployer that develops, intentionally and substantially modifies, deploys, offers, sells, leases, licenses, gives or otherwise provides, as applicable, an artificial intelligence system that is intended to interact with consumers shall ensure that such artificial intelligence system discloses to each consumer who interacts with such artificial intelligence system that such consumer is interacting with an artificial intelligence system.

(b) No disclosure shall be required under subsection (a) of this section under circumstances in which:

(1) A reasonable person would deem it obvious that such person is interacting with an artificial intelligence system; or

(2) The developer or deployer did not directly make the artificial intelligence system available to consumers.

Sec. 6. (NEW) (Effective October 1, 2024) (a) Nothing in sections 1 to 7, inclusive, of this act shall be construed to restrict a developer's or deployer's ability to: (1) Comply with federal, state or municipal ordinances or regulations; (2) comply with a civil, criminal or regulatory inquiry, investigation, subpoena or summons by federal, state, municipal or other governmental authorities; (3) cooperate with law enforcement agencies concerning conduct or activity that the developer or deployer reasonably and in good faith believes may violate federal, state or municipal ordinances or regulations; (4) investigate, establish, exercise, prepare for or defend legal claims; (5) take immediate steps to protect an interest that is essential for the life or physical safety of the consumer or another individual; (6) prevent, detect, protect against or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities or any illegal activity, preserve the integrity or security of systems or investigate, report or prosecute those responsible for any such action; (7) engage in public or peer-reviewed scientific or statistical research in the public interest that adheres to all other applicable ethics and privacy laws and is approved, monitored and governed by an institutional review board that determines, or by similar independent oversight entities that determine, (A) that the expected benefits of the research outweigh the risks associated with such research, and (B) whether the developer or deployer has implemented reasonable safeguards to mitigate the risks associated with such research; or (8) assist another developer or deployer with any of the obligations imposed under sections 1 to 7, inclusive, of this act.

(b) The obligations imposed on developers or deployers under sections 1 to 7, inclusive, of this act shall not restrict a developer's or deployer's ability to: (1) Effectuate a product recall; or (2) identify and repair technical errors that impair existing or intended functionality.

(c) The obligations imposed on developers or deployers under sections 1 to 7, inclusive, of this act shall not apply where compliance by the developer or deployer with said sections would violate an evidentiary privilege under the laws of this state.

(d) Nothing in sections 1 to 7, inclusive, of this act shall be construed to impose any obligation on a developer or deployer that adversely affects the rights or freedoms of any person, including, but not limited to, the rights of any person: (1) To freedom of speech or freedom of the press guaranteed in the First Amendment to the United States Constitution; or (2) under section 52-146t of the general statutes.

(e) If a developer or deployer engages in any action pursuant to an exemption set forth in subsections (a) to (d), inclusive, of this section, the developer or deployer bears the burden of demonstrating that such action qualifies for such exemption.

Sec. 7. (NEW) (Effective October 1, 2024) (a) The Attorney General shall have exclusive authority to enforce the provisions of sections 1 to 6, inclusive, of this act.

(b) Except as provided in subsection (f) of this section, during the period beginning on July 1, 2025, and ending on June 30, 2026, the Attorney General shall, prior to initiating any action for a violation of any provision of sections 1 to 6, inclusive, of this act, issue a notice of violation to the developer or deployer if the Attorney General determines that it is possible to cure such violation. If the developer or deployer fails to cure such violation not later than sixty days after receipt of the notice of violation, the Attorney General may bring an action pursuant to this section. Not later than January 1, 2027, the Attorney General shall submit a report, in accordance with section 11-4a of the general statutes, to the joint standing committee of the General Assembly having cognizance of matters relating to consumer protection disclosing: (1) The number of notices of violation the Attorney General has issued; (2) the nature of each violation; (3) the number of violations that were cured during the sixty-day cure period; and (4) any other matter the Attorney General deems relevant for the purposes of such report.

(c) Except as provided in subsection (f) of this section, beginning on July 1, 2026, the Attorney General may, in determining whether to grant a developer or deployer the opportunity to cure an alleged violation described in subsection (b) of this section, consider: (1) The number of violations; (2) the size and complexity of the developer or deployer; (3) the nature and extent of the developer's or deployer's business; (4) the substantial likelihood of injury to the public; (5) the safety of persons or property; and (6) whether such alleged violation was likely caused by human or technical error.

(d) Nothing in sections 1 to 6, inclusive, of this act shall be construed as providing the basis for a private right of action for violations of said sections or any other law.

(e) Except as provided in subsection (f) of this section, a violation of the requirements established in sections 1 to 6, inclusive, of this act shall constitute an unfair trade practice for purposes of section 42-110b of the general statutes and shall be enforced solely by the Attorney General, provided the provisions of section 42-110g of the general statutes shall not apply to such violation.

(f) (1) In any action commenced by the Attorney General for any violation of sections 1 to 6, inclusive, of this act, it shall be an affirmative defense that:

(A) The developer or deployer of the generative artificial intelligence system or high-risk artificial intelligence system, as applicable, implemented and maintains a program that is in compliance with:

(i) The latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems;

(ii) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate; or

(iii) Any risk management framework for artificial intelligence systems designated by the Banking Commissioner or Insurance Commissioner if the developer or deployer is regulated by the Department of Banking or Insurance Department; and

(B) The developer or deployer:

(i) Encourages the deployers or users of the generative artificial intelligence system or high-risk artificial intelligence system, as applicable, to provide feedback to such developer or deployer;

(ii) Discovers a violation of any provision of sections 1 to 6, inclusive, of this act (I) as a result of the feedback described in subparagraph (B)(i) of this subdivision, (II) through adversarial testing or red-teaming, as such terms are defined or used by the National Institute of Standards and Technology, or (III) through an internal review process; and

(iii) Not later than sixty days after discovering the violation as set forth in subparagraph (B)(ii) of this subdivision, (I) cures such violation, and (II) provides to the Attorney General, in a form and manner prescribed by the Attorney General, notice that such violation has been cured and evidence that any harm caused by such violation has been mitigated.

(2) The developer or deployer bears the burden of demonstrating to the Attorney General that the requirements established in subdivision (1) of this subsection have been satisfied.

Sec. 8. (NEW) (Effective from passage) (a) For the purposes of this section, "artificial intelligence" means: (1) An artificial system that (A) performs tasks under varying and unpredictable circumstances without significant human oversight or can learn from experience and improve such performance when exposed to datasets, (B) is developed in any context, including, but not limited to, software or physical hardware, and solves tasks requiring human-like perception, cognition, planning, learning, communication or physical action, or (C) is designed to (i) think or act like a human by using, for example, a cognitive architecture or neural network, or (ii) act rationally by using, for example, an intelligent software agent or embodied robot that achieves goals through perception, planning, reasoning, learning, communication, decision-making or action; and (2) a set of techniques, including, but not limited to, machine learning, that is designed to approximate a cognitive task.

(b) There is established an Artificial Intelligence Advisory Council to engage stakeholders and experts to: (1) Make recommendations concerning, and develop best practices for, the ethical and equitable use of artificial intelligence in state government; (2) assess the White House Office of Science and Technology Policy's "Blueprint for an AI Bill of Rights" and similar materials and make recommendations concerning the (A) regulation of the use of artificial intelligence in the private sector based, among other things, on said blueprint, and (B) adoption of a Connecticut artificial intelligence bill of rights based on said blueprint; and (3) make recommendations concerning the adoption of other legislation concerning artificial intelligence.

(c) (1) (A) The advisory council shall be part of the Legislative Department and consist of the following voting members: (i) One appointed by the speaker of the House of Representatives, who shall be a representative of the industries that are developing artificial intelligence; (ii) one appointed by the president pro tempore of the Senate, who shall be a representative of the industries that are using artificial intelligence; (iii) one appointed by the majority leader of the House of Representatives, who shall be an academic with a concentration in the study of technology and technology policy; (iv) one appointed by the majority leader of the Senate, who shall be an academic with a concentration in the study of government and public policy; (v) one appointed by the minority leader of the House of Representatives, who shall be a representative of an industry association representing the industries that are developing artificial intelligence; (vi) one appointed by the minority leader of the Senate, who shall be a representative of an industry association representing the industries that are using artificial intelligence; (vii) one appointed by the House chairperson of the joint standing committee of the General Assembly having cognizance of matters relating to consumer protection; (viii) one appointed by the Senate chairperson of the joint standing committee of the General Assembly having cognizance of matters relating to consumer protection; (ix) two appointed by the Governor, who shall be members of the Connecticut Academy of Science and Engineering; and (x) the House and Senate chairpersons of the joint standing committee of the General Assembly having cognizance of matters relating to consumer protection.

(B) All voting members appointed pursuant to subparagraphs (A)(i) to (A)(ix), inclusive, of this subdivision shall have professional experience or academic qualifications in matters pertaining to artificial intelligence, automated systems, government policy or another related field.

(C) All initial appointments to the advisory council under subparagraphs (A)(i) to (A)(ix), inclusive, of this subdivision shall be made not later than thirty days after the effective date of this section. Any vacancy shall be filled by the appointing authority.

(D) Any action taken by the advisory council shall be taken by a majority vote of all members present who are entitled to vote, provided no such action may be taken unless at least fifty per cent of such members are present.

(2) The advisory council shall include the following nonvoting, ex-officio members: (A) The Attorney General, or the Attorney General's designee; (B) the Comptroller, or the Comptroller's designee; (C) the Treasurer, or the Treasurer's designee; (D) the Commissioner of Administrative Services, or said commissioner's designee; (E) the Chief Data Officer, or said officer's designee; (F) the executive director of the Freedom of Information Commission, or said executive director's designee; (G) the executive director of the Commission on Women, Children, Seniors, Equity and Opportunity, or said executive director's designee; (H) the Chief Court Administrator, or said administrator's designee; and (I) the executive director of the Connecticut Academy of Science and Engineering, or said executive director's designee.

(d) The chairpersons of the joint standing committee of the General Assembly having cognizance of matters relating to consumer protection shall serve as chairpersons of the advisory council. Such chairpersons shall schedule the first meeting of the advisory council, which shall be held not later than sixty days after the effective date of this section.

(e) The administrative staff of the joint standing committee of the General Assembly having cognizance of matters relating to consumer protection shall serve as administrative staff of the advisory council.

Sec. 9. Subsection (a) of section 53a-189c of the general statutes is repealed and the following is substituted in lieu thereof (Effective October 1, 2024):

(a) A person is guilty of unlawful dissemination of an intimate image when (1) such person intentionally disseminates by electronic or other means a photograph, film, videotape or other recorded [image] or synthetic image of (A) the genitals, pubic area or buttocks of another person with less than a fully opaque covering of such body part, or the breast of such other person who is female with less than a fully opaque covering of any portion of such breast below the top of the nipple, or (B) another person engaged in sexual intercourse, as defined in section 53a-193, (2) such person disseminates such image without the consent of such other person, knowing that such other person understood that the image would not be so disseminated, and (3) such other person suffers harm as a result of such dissemination. For purposes of this subsection, "disseminate" means to sell, give, provide, lend, trade, mail, deliver, transfer, publish, distribute, circulate, present, exhibit, advertise or otherwise offer; [, and] "harm" includes, but is not limited to, subjecting such other person to hatred, contempt, ridicule, physical injury, financial injury, psychological harm or serious emotional distress; and "synthetic image" means an image that is partially or fully generated by a computer system, and not wholly recorded by a camera.

Sec. 10. Section 9-600 of the general statutes is repealed and the following is substituted in lieu thereof (Effective July 1, 2024):

[This] Except as otherwise provided in section 11 of this act, this chapter applies to: (1) The election, and all primaries preliminary thereto, of all public officials, except presidential electors, United States senators and members in Congress, and (2) any referendum question. This chapter also applies, except for the provisions of sections 9-611 to 9-620, inclusive, to persons who are candidates in a primary for town committee members.

Sec. 11. (NEW) (Effective July 1, 2024) (a) As used in this section:

(1) "Artificial intelligence" means a machine-based system that (A) can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments, and (B) uses machine and human-based inputs to (i) perceive real and virtual environments, (ii) abstract such perceptions into models through analysis in an automated manner, and (iii) formulate options for information or action through model inference;

(2) "Candidate" means a human being who seeks election, or nomination for election, to any municipal, federal or state office;

(3) "Deceptive media" means an image, audio or video that (A) depicts a human being engaging in speech or conduct in which the human being did not engage, (B) a reasonable viewer or listener would incorrectly believe depicts such human being engaging in such speech or conduct, and (C) was produced, in whole or in part, by artificial intelligence;

(4) "Election" has the same meaning as provided in section 9-1 of the general statutes; and

(5) "Elector" has the same meaning as provided in section 9-1 of the general statutes.

(b) Except as provided in subsection (c) of this section, no person shall distribute, or enter into an agreement with another person to distribute, any deceptive media during the ninety-day period preceding an election, or any primary precedent thereto, if:

(1) The person (A) knows such deceptive media depicts any human being engaging in speech or conduct in which such person did not engage, and (B) in distributing such deceptive media or entering into such agreement, intends to (i) harm the reputation or electoral prospects of a candidate in the primary or election, and (ii) change the voting behavior of electors in the primary or election by deceiving such electors into incorrectly believing that the human being described in subparagraph (A) of this subdivision engaged in the speech or conduct described in said subparagraph; and

(2) It is reasonably foreseeable that the distribution will (A) harm the reputation or electoral prospects of a candidate in the primary or election, and (B) change the voting behavior of electors in the primary or election in the manner set forth in subparagraph (B)(ii) of subdivision (1) of this subsection.

(c) A person may distribute, or enter into an agreement with another person to distribute, deceptive media during the ninety-day period preceding a primary or election if the deceptive media includes a disclaimer:

(1) Informing viewers or listeners, as applicable, that the media has been manipulated by technical means and depicts speech or conduct that did not occur;

(2) If the deceptive media is a video, that (A) appears throughout the entirety of the video, (B) is clearly visible to, and readable by, the average viewer, (C) is in letters (i) at least as large as the majority of the other text included in the video, or (ii) if there is no other text included in the video, in a size that is easily readable by the average viewer, and (D) is in the same language otherwise used in such deceptive media;

(3) If the deceptive media exclusively consists of audio, that is read (A) at the beginning and end of the media, (B) in a clearly spoken manner, (C) in a pitch that can be easily heard by the average listener, and (D) in the same language otherwise used in such deceptive media;

(4) If the deceptive media is an image, that (A) is clearly visible to, and readable by, the average viewer, (B) if the media contains other text, is in letters (i) at least as large as the majority of the other text included in the image, or (ii) if there is no other text included in the image, in a size that is easily readable by the average viewer, and (C) is in the same language otherwise used in such deceptive media; and

(5) If the deceptive media was generated by editing an existing image, audio or video, that includes a citation directing the viewer or listener to the original source from which the unedited version of such existing image, audio or video was obtained.
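
An editorial illustration, not part of the bill text: the disclaimer conditions in subsection (c) are format-specific and lend themselves to a mechanical screening check. In the hypothetical sketch below, the record layout and field names are invented for illustration only:

```python
# Illustrative sketch only -- a rough screening check for the disclaimer
# conditions in Sec. 11(c). The record layout and field names are invented;
# the bill prescribes no data format. The (c)(5) source-citation requirement
# for edited media is omitted for brevity.
def disclaimer_ok(media: dict) -> bool:
    d = media.get("disclaimer")
    if not d or not d.get("states_conduct_did_not_occur"):  # (c)(1)
        return False
    if d.get("language") != media.get("language"):          # (c)(2)-(c)(4): same language
        return False
    if media["kind"] == "video":                            # (c)(2)
        return bool(d.get("shown_throughout") and d.get("clearly_readable"))
    if media["kind"] == "audio":                            # (c)(3)
        return bool(d.get("read_at_start_and_end") and d.get("clearly_spoken"))
    if media["kind"] == "image":                            # (c)(4)
        return bool(d.get("clearly_readable"))
    return False

# Example: an audio clip with a compliant spoken disclaimer.
clip = {"kind": "audio", "language": "en",
        "disclaimer": {"states_conduct_did_not_occur": True, "language": "en",
                       "read_at_start_and_end": True, "clearly_spoken": True}}
print(disclaimer_ok(clip))  # True
```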

(d) (1) Any person who violates any provision of this section shall be guilty of a class C misdemeanor, except that any violation committed not later than five years after conviction for a prior violation shall be a class D felony.

(2) Any penalty imposed under subdivision (1) of this subsection shall be in addition to any injunctive or other equitable relief ordered under subsection (e) of this section.

(e) (1) The Attorney General, a human being described in subparagraph (A) of subdivision (1) of subsection (b) of this section or candidate for office who has been, or is likely to be, injured by the distribution of deceptive media in violation of the provisions of this section, or an organization that represents the interests of electors who have been, or are likely to be, deceived by any such distribution, may commence a civil action, in a court of competent jurisdiction, seeking to permanently enjoin any person who is alleged to have committed such violation from continuing such violation. No court shall have jurisdiction to grant extraordinary relief in the form of a temporary restraining order or preliminary injunction for any violation of this section.

(2) In any civil action commenced under subdivision (1) of this subsection, the plaintiff shall bear the burden of proving, by clear and convincing evidence, that the defendant distributed deceptive media in violation of the provisions of this section.

(3) Any party, other than the Attorney General, who prevails in a civil action commenced under subdivision (1) of this subsection shall be awarded reasonable attorney's fees and costs to be taxed by the court.
Sec. 12. (Effective from passage) (a) As used in this section: 789 
(1) "Artificial intelligence" means any technology, including, but not 790 
limited to, machine learning, that uses data to train an algorithm or 791 
predictive model for the purpose of enabling a computer system or 792 
service to autonomously perform any task, including, but not limited to, 793 
visual perception, language processing or speech recognition, that is 794 
normally associated with human intelligence or perception; 795 
(2) "Generative artificial intelligence" means any form of artificial 796 
intelligence, including, but not limited to, a foundation model, that is 797 
able to produce synthetic digital content; 798 
(3) "Machine learning" means any technique that enables a computer 799 
system or service to autonomously learn and adapt by using algorithms 800 
and statistical models to autonomously analyze and draw inferences 801 
from patterns in data; and 802 
(4) "State agency" means any department, board, council, 803 
commission, institution or other executive branch agency of state 804 
government, including, but not limited to, each constituent unit and 805 
each public institution of higher education. 806 
(b) Each state agency shall study how generative artificial intelligence 807 
may be incorporated in its processes to improve efficiencies. Each state 808    
agency shall solicit input concerning such incorporation from its 809
employees, including, but not limited to, any applicable collective 810
bargaining unit that represents its employees, and from appropriate 811
experts from civil society organizations, academia and industry. 812
(c) Not later than January 1, 2025, each state agency shall submit the 813 
results of such study to the Department of Administrative Services, 814 
including a request for approval of any potential pilot project utilizing 815 
generative artificial intelligence that the state agency intends to 816 
establish, provided such use is in accordance with the policies and 817 
procedures established by the Office of Policy and Management 818 
pursuant to subsection (b) of section 4-68jj of the general statutes. Any 819 
such pilot project shall measure how generative artificial intelligence (1) 820 
improves Connecticut residents' experience with and access to 821 
government services, and (2) supports state agency employees in the 822 
performance of their duties, in addition to any domain-specific impacts 823
to be measured by the state agency. The Commissioner of 824 
Administrative Services shall assess any such proposed pilot project in 825 
accordance with the provisions of section 4a-2e of the general statutes, 826 
as amended by this act, and may disapprove any pilot project that fails 827 
such assessment or requires additional legislative authorization. 828 
(d) Not later than February 1, 2025, the Commissioner of 829 
Administrative Services shall submit a report, in accordance with the 830 
provisions of section 11-4a of the general statutes, to the joint standing 831 
committees of the General Assembly having cognizance of matters 832 
relating to consumer protection and government administration. Such 833 
report shall include a summary of all pilot projects approved by the 834 
commissioner under this section and any recommendations for 835 
legislation necessary to implement additional pilot projects. 836 
Sec. 13. Section 4a-2e of the 2024 supplement to the general statutes 837 
is repealed and the following is substituted in lieu thereof (Effective July 838 
1, 2024): 839 
(a) For the purposes of this section: 840    
(1) "Artificial intelligence" means (A) an artificial system that (i) 841 
performs tasks under varying and unpredictable circumstances without 842 
significant human oversight or can learn from experience and improve 843 
such performance when exposed to data sets, (ii) is developed in any 844 
context, including, but not limited to, software or physical hardware, 845 
and solves tasks requiring human-like perception, cognition, planning, 846 
learning, communication or physical action, or (iii) is designed to (I) 847 
think or act like a human, including, but not limited to, a cognitive 848 
architecture or neural network, or (II) act rationally, including, but not 849 
limited to, an intelligent software agent or embodied robot that achieves 850 
goals using perception, planning, reasoning, learning, communication, 851 
decision-making or action, or (B) a set of techniques, including, but not 852 
limited to, machine learning, that is designed to approximate a cognitive 853 
task; [and] 854 
(2) "Generative artificial intelligence" means any form of artificial 855 
intelligence, including, but not limited to, a foundation model, that is 856 
able to produce synthetic digital content; and 857 
[(2)] (3) "State agency" has the same meaning as provided in section 858 
4d-1. 859 
(b) (1) Not later than December 31, 2023, and annually thereafter, the 860 
[Department] Commissioner of Administrative Services shall conduct 861 
an inventory of all systems that employ artificial intelligence and are in 862 
use by any state agency. Each such inventory shall include at least the 863 
following information for each such system: 864 
(A) The name of such system and the vendor, if any, that provided 865 
such system; 866 
(B) A description of the general capabilities and uses of such system; 867 
(C) Whether such system was used to independently make, inform or 868 
materially support a conclusion, decision or judgment; and 869 
(D) Whether such system underwent an impact assessment prior to 870    
implementation. 871 
(2) The [Department] Commissioner of Administrative Services shall 872 
make each inventory conducted pursuant to subdivision (1) of this 873 
subsection publicly available on the state's open data portal. 874 
(c) Beginning on February 1, 2024, the [Department] Commissioner 875 
of Administrative Services shall perform ongoing assessments of 876 
systems that employ artificial intelligence and are in use by state 877 
agencies to ensure that no such system shall result in any unlawful 878 
discrimination or disparate impact described in subparagraph (B) of 879 
subdivision (1) of subsection (b) of section 4-68jj. The [department] 880 
commissioner shall perform such assessment in accordance with the 881 
policies and procedures established by the Office of Policy and 882 
Management pursuant to subsection (b) of section 4-68jj. 883 
(d) The Commissioner of Administrative Services shall, in 884 
consultation with other state agencies, collective bargaining units that 885 
represent state agency employees and industry experts, develop 886 
trainings for state agency employees on (1) the use of generative 887 
artificial intelligence tools that are determined by the commissioner, 888 
pursuant to the assessment performed under subsection (c) of this 889 
section, to achieve equitable outcomes, and (2) methods for identifying 890 
and mitigating potential output inaccuracies, fabricated text, 891 
hallucinations and biases of generative artificial intelligence while 892 
respecting the privacy of the public and complying with all applicable 893 
state laws and policies. Beginning on July 1, 2025, the commissioner 894 
shall make such trainings available to state agency employees not less 895 
than annually. 896 
Sec. 14. Subsection (b) of section 4-124w of the 2024 supplement to the 897 
general statutes is repealed and the following is substituted in lieu 898 
thereof (Effective July 1, 2024): 899 
(b) The department head of the Office of Workforce Strategy shall be 900 
the Chief Workforce Officer, who shall be appointed by the Governor in 901    
accordance with the provisions of sections 4-5 to 4-8, inclusive, with the 902 
powers and duties therein prescribed. The Chief Workforce Officer shall 903 
be qualified by training and experience to perform the duties of the 904 
office as set forth in this section and shall have knowledge of publicly 905 
funded workforce training programs. The Chief Workforce Officer shall: 906 
(1) Be the principal advisor for workforce development policy, 907 
strategy and coordination to the Governor; 908 
(2) Be the lead state official for the development of employment and 909 
training strategies and initiatives; 910 
(3) Be the chairperson of the Workforce Cabinet, which shall consist 911 
of agencies involved with employment and training, as designated by 912 
the Governor pursuant to section 31-3m. The Workforce Cabinet shall 913 
meet at the direction of the Governor or the Chief Workforce Officer; 914 
(4) Be the liaison between the Governor, the Governor's Workforce 915 
Council, established pursuant to section 31-3h, and any local, regional, 916
state or federal organizations and entities with respect to workforce 917 
development policy, strategy and coordination, including, but not 918 
limited to, implementation of the Workforce Innovation and 919 
Opportunity Act of 2014, P.L. 113-128, as amended from time to time; 920 
(5) Develop, and update as necessary, a state workforce strategy in 921 
consultation with the Governor's Workforce Council and the Workforce 922 
Cabinet and subject to the approval of the Governor. The Chief 923 
Workforce Officer shall submit, in accordance with the provisions of 924 
section 11-4a, the state workforce strategy to the joint standing 925 
committees of the General Assembly having cognizance of matters 926 
relating to appropriations, commerce, education, higher education and 927 
employment advancement, and labor and public employees at least 928 
thirty days before submitting such state workforce strategy to the 929 
Governor for his or her approval; 930 
(6) Coordinate workforce development activities (A) funded through 931    
state resources, (B) funded through funds received pursuant to the 932 
Workforce Innovation and Opportunity Act of 2014, P.L. 113-128, as 933 
amended from time to time, or (C) administered in collaboration with 934 
any state agency for the purpose of furthering the goals and outcomes 935 
of the state workforce strategy approved by the Governor pursuant to 936 
subdivision (5) of this subsection and the workforce development plan 937 
developed by the Governor's Workforce Council pursuant to the 938 
provisions of section 31-11p; 939 
(7) Collaborate with the regional workforce development boards to 940 
adapt the best practices for workforce development established by such 941 
boards for state-wide implementation, if possible; 942 
(8) Coordinate measurement and evaluation of outcomes across 943 
education and workforce development programs, in conjunction with 944 
state agencies, including, but not limited to, the Labor Department, the 945 
Department of Education and the Office of Policy and Management; 946 
(9) Notwithstanding any provision of the general statutes, review any 947 
state plan for each program set forth in Section 103(b) of the Workforce 948 
Innovation and Opportunity Act of 2014, P.L. 113-128, as amended from 949 
time to time, before such plan is submitted to the Governor; 950 
(10) Establish methods and procedures to ensure the maximum 951 
involvement of members of the public, the legislature and local officials 952 
in workforce development policy, strategy and coordination; 953 
(11) In conjunction with one or more state agencies, enter into such 954
contractual agreements, in accordance with established procedures and 955
with the approval of the Secretary of the Office of Policy and Management, 956
as may be necessary to carry out the provisions of this section. The Chief 957 
Workforce Officer may enter into agreements with other state agencies 958 
for the purpose of performing the duties of the Office of Workforce 959 
Strategy, including, but not limited to, administrative, human resources, 960 
finance and information technology functions; 961    
(12) Market and communicate the state workforce strategy to ensure 962 
maximum engagement with students, trainees, job seekers and 963 
businesses while effectively elevating the state's workforce profile 964 
nationally; 965 
(13) For the purposes of subsection (a) of section 10-21c, identify 966
subject areas, courses, curriculum, content and programs that may be 967 
offered to students in elementary and high school in order to improve 968 
student outcomes and meet the workforce needs of the state; 969 
(14) Issue guidance to state agencies, the Governor's Workforce 970 
Council and regional workforce development boards in furtherance of 971 
the state workforce strategy and the workforce development plan 972 
developed by the Governor's Workforce Council pursuant to the 973 
provisions of section 31-11p. Such guidance shall be approved by the 974 
Secretary of the Office of Policy and Management, allow for a reasonable 975 
period for implementation and take effect not less than thirty days from 976 
such approval. The Chief Workforce Officer shall consult on the 977 
development and implementation of any guidance with the agency, 978 
council or board impacted by such guidance; 979 
(15) Coordinate, in consultation with the Labor Department and 980 
regional workforce development boards, to ensure compliance with 981
state and federal laws for the purpose of furthering the service 982 
capabilities of programs offered pursuant to the Workforce Innovation 983 
and Opportunity Act, P.L. 113-128, as amended from time to time, and 984 
the United States Department of Labor's American Job Center system; 985 
(16) Coordinate, in consultation with the Department of Social 986 
Services, with community action agencies to further the state workforce 987 
strategy; [and] 988 
(17) In consultation with the regional workforce development boards 989 
established under section 31-3k, the Department of Economic and 990 
Community Development and other relevant state agencies, incorporate 991 
training concerning artificial intelligence, as defined in section 1 of this 992    
act, into workforce training programs offered in this state; 993 
(18) In consultation with the Department of Economic and 994 
Community Development, the Connecticut Academy of Science and 995 
Engineering and broadband Internet access service providers, as 996 
defined in section 16-330a, design an outreach program for the purpose 997 
of promoting access to broadband Internet access service, as defined in 998 
said section, in underserved communities in this state, and identify a 999 
nonprofit organization to implement and lead such outreach program 1000 
under the supervision of the Chief Workforce Officer, the Department 1001 
of Economic and Community Development and the Connecticut 1002 
Academy of Science and Engineering; and 1003 
[(17)] (19) Take any other action necessary to carry out the provisions 1004 
of this section. 1005 
Sec. 15. (NEW) (Effective July 1, 2024) Not later than July 1, 2025, the 1006 
Board of Regents for Higher Education shall establish, on behalf of 1007 
Charter Oak State College, a "Connecticut Citizens AI Academy" for the 1008 
purpose of curating and offering online courses concerning artificial 1009 
intelligence and the responsible use of artificial intelligence. The board 1010 
shall, in consultation with Charter Oak State College, develop 1011 
certificates and badges to be awarded to persons who successfully 1012 
complete such courses. As used in this section, "artificial intelligence" 1013 
has the same meaning as provided in section 1 of this act. 1014 
Sec. 16. (NEW) (Effective July 1, 2024) (a) As used in this section: 1015 
(1) "Artificial intelligence" has the same meaning as provided in 1016 
section 1 of this act; 1017 
(2) "Generative artificial intelligence system" has the same meaning 1018 
as provided in section 1 of this act; and 1019 
(3) "Prompt engineering" means the process of guiding a generative 1020 
artificial intelligence system to generate a desired output. 1021    
(b) Not later than July 1, 2025, the Board of Regents for Higher 1022 
Education shall establish, on behalf of the regional community-technical 1023 
colleges, certificate programs in prompt engineering, artificial 1024 
intelligence marketing for small businesses and artificial intelligence for 1025 
small business operations. 1026 
Sec. 17. (Effective July 1, 2024) Not later than December 31, 2024, the 1027 
Department of Economic and Community Development shall: 1028 
(1) In collaboration with The University of Connecticut and the 1029 
Connecticut State Colleges and Universities, develop a plan to offer 1030 
high-performance computing services to businesses and researchers in 1031 
this state; 1032 
(2) In collaboration with The University of Connecticut, establish a 1033 
confidential computing cluster for businesses and researchers in this 1034 
state; and 1035 
(3) In collaboration with industry and academia, conduct a "CT AI 1036 
Symposium" to foster collaboration between academia, government and 1037 
industry for the purpose of promoting the establishment and growth of 1038 
artificial intelligence businesses in this state. 1039 
This act shall take effect as follows and shall amend the following 
sections: 
 
Section 1    October 1, 2024    New section
Sec. 2       October 1, 2024    New section
Sec. 3       October 1, 2024    New section
Sec. 4       October 1, 2024    New section
Sec. 5       October 1, 2024    New section
Sec. 6       October 1, 2024    New section
Sec. 7       October 1, 2024    New section
Sec. 8       from passage       New section
Sec. 9       October 1, 2024    53a-189c(a)
Sec. 10      July 1, 2024       9-600
Sec. 11      July 1, 2024       New section
Sec. 12      from passage       New section
Sec. 13      July 1, 2024       4a-2e
Sec. 14      July 1, 2024       4-124w(b)
Sec. 15      July 1, 2024       New section
Sec. 16      July 1, 2024       New section
Sec. 17      July 1, 2024       New section
 
Statement of Purpose:   
To: (1) Establish requirements concerning the development, 
deployment and use of certain artificial intelligence systems; (2) 
establish an Artificial Intelligence Advisory Council; (3) prohibit 
dissemination of certain synthetic images; (4) prohibit distribution of, 
and agreements to distribute, certain deceptive media concerning 
elections; (5) require state agencies to study potential uses of generative 
artificial intelligence and propose pilot projects; (6) require the 
Commissioner of Administrative Services to provide training 
concerning generative artificial intelligence; (7) require the Chief 
Workforce Officer to (A) incorporate artificial intelligence training into 
workforce training programs, and (B) design a broadband outreach 
program; (8) require the Board of Regents for Higher Education to 
establish (A) a "Connecticut Citizens AI Academy", and (B) certificate 
programs in fields related to artificial intelligence; and (9) require the 
Department of Economic and Community Development to (A) develop 
a plan to offer high-performance computing services, (B) establish a 
confidential computing cluster, and (C) conduct a "CT AI Symposium". 
 
[Proposed deletions are enclosed in brackets. Proposed additions are indicated by underline, except 
that when the entire text of a bill or resolution or a section of a bill or resolution is new, it is not 
underlined.] 
 
Co-Sponsors:  SEN. LOONEY, 11th Dist.; SEN. DUFF, 25th Dist. 
SEN. ANWAR, 3rd Dist.; SEN. CABRERA, 17th Dist. 
SEN. FLEXER, 29th Dist.; SEN. GASTON, 23rd Dist. 
SEN. HARTLEY, 15th Dist.; SEN. HOCHADEL, 13th Dist. 
SEN. KUSHNER, 24th Dist.; SEN. LESSER, 9th Dist. 
SEN. MAHER, 26th Dist.; SEN. MARONEY, 14th Dist. 
SEN. MARX, 20th Dist.; SEN. MCCRORY, 2nd Dist. 
SEN. MILLER P., 27th Dist.; SEN. MOORE, 22nd Dist. 
SEN. NEEDLEMAN, 33rd Dist.; SEN. OSTEN, 19th Dist. 
SEN. RAHMAN, 4th Dist.; SEN. SLAP, 5th Dist. 
SEN. WINFIELD, 10th Dist.  
    