Texas 2025 - 89th Regular

Texas House Bill HB1709 Compare Versions

11 By: Capriglione H.B. No. 1709
46 A BILL TO BE ENTITLED
57 AN ACT
68 relating to the regulation and reporting on the use of artificial
79 intelligence systems by certain business entities and state
810 agencies; providing civil penalties.
911 BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:
1012 SECTION 1. This Act may be cited as the Texas Responsible
1113 Artificial Intelligence Governance Act.
1214 SECTION 2. Title 11, Business & Commerce Code, is amended by
1315 adding Subtitle D to read as follows:
1416 SUBTITLE D. ARTIFICIAL INTELLIGENCE PROTECTION
1517 CHAPTER 551. ARTIFICIAL INTELLIGENCE PROTECTION
1618 SUBCHAPTER A. GENERAL PROVISIONS
1719 Sec. 551.001. DEFINITIONS. In this chapter:
1820 (1) "Algorithmic discrimination" means any condition
1921 in which an artificial intelligence system, when deployed, creates an
2022 unlawful discrimination of a protected classification in violation
2123 of the laws of this state or federal law.
2224 (A) "Algorithmic discrimination" does not
2325 include the offer, license, or use of a high-risk artificial
2426 intelligence system by a developer or deployer for the sole purpose
2527 of the developer's or deployer's self-testing, for a non-deployed
2628 purpose, to identify, mitigate, or prevent discrimination or
2729 otherwise ensure compliance with state and federal law.
2830 (2) "Artificial intelligence system" means the use of
2931 machine learning and related technologies that use data to train
3032 statistical models for the purpose of enabling computer systems to
3133 perform tasks normally associated with human intelligence or
3234 perception, such as computer vision, speech or natural language
3335 processing, and content generation.
3436 (3) "Biometric identifier" means a retina or iris
3537 scan, fingerprint, voiceprint, or record of hand or face geometry.
3638 (4) "Council" means the Artificial Intelligence
3739 Council established under Chapter 553.
3840 (5) "Consequential decision" means any decision that
3941 has a material, legal, or similarly significant effect on a
4042 consumer's access to, cost of, or terms or conditions of:
4143 (A) a criminal case assessment, a sentencing or
4244 plea agreement analysis, or a pardon, parole, probation, or release
4345 decision;
4446 (B) education enrollment or an education
4547 opportunity;
4648 (C) employment or an employment opportunity;
4749 (D) a financial service;
4850 (E) an essential government service;
4951 (F) residential utility services;
5052 (G) a health-care service or treatment;
5153 (H) housing;
5254 (I) insurance;
5355 (J) a legal service;
5456 (K) a transportation service;
5557 (L) constitutionally protected services or
5658 products; or
5759 (M) elections or voting process.
5860 (6) "Consumer" means an individual who is a resident
5961 of this state acting only in an individual or household context.
6062 The term does not include an individual acting in a commercial or
6163 employment context.
6264 (7) "Deploy" means to put into effect or
6365 commercialize.
6466 (8) "Deployer" means a person doing business in this
6567 state that deploys a high-risk artificial intelligence system.
6668 (9) "Developer" means a person doing business in this
6769 state that develops a high-risk artificial intelligence system or
6870 substantially or intentionally modifies an artificial intelligence
6971 system.
7072 (10) "Digital service" means a website, an
7173 application, a program, or software that collects or processes
7274 personal identifying information with Internet connectivity.
7375 (11) "Digital service provider" means a person who:
7476 (A) owns or operates a digital service;
7577 (B) determines the purpose of collecting and
7678 processing the personal identifying information of users of the
7779 digital service; and
7880 (C) determines the means used to collect and
7981 process the personal identifying information of users of the
8082 digital service.
8183 (12) "Distributor" means a person, other than the
8284 developer, that makes an artificial intelligence system available
8385 in the market for a commercial purpose.
8486 (13) "Generative artificial intelligence" means
8587 artificial intelligence models that can emulate the structure and
8688 characteristics of input data in order to generate derived
8789 synthetic content. This can include images, videos, audio, text,
8890 and other digital content.
8991 (14) "High-risk artificial intelligence system" means
9092 any artificial intelligence system that is a substantial factor in
9193 a consequential decision. The term does not include:
9294 (A) an artificial intelligence system if the
9395 artificial intelligence system is intended to detect
9496 decision-making patterns or deviations from prior decision-making
9597 patterns and is not intended to replace or influence a previously
9698 completed human assessment without sufficient human review;
9799 (B) an artificial intelligence system that
98100 violates a provision of Subchapter B; or
99101 (C) the following technologies, unless the
100102 technologies, when deployed, make, or are a substantial factor in
101103 making, a consequential decision:
102104 (i) anti-malware;
103105 (ii) anti-virus;
104106 (iii) calculators;
105107 (iv) cybersecurity;
106108 (v) databases;
107109 (vi) data storage;
108110 (vii) firewall;
109111 (viii) fraud detection systems;
110112 (ix) internet domain registration;
111113 (x) internet website loading;
112114 (xi) networking;
113115 (xii) operational technology;
114116 (xiii) spam- and robocall-filtering;
115117 (xiv) spell-checking;
116118 (xv) spreadsheets;
117119 (xvi) web caching;
118120 (xvii) web scraping;
119121 (xviii) web hosting or any similar
120122 technology; or
121123 (xix) any technology that solely
122124 communicates in natural language for the sole purpose of providing
123125 users with information, making referrals or recommendations
124126 relating to customer service, and answering questions and is
125127 subject to an acceptable use policy that prohibits generating
126128 content that is discriminatory or harmful, as long as the system
127129 does not violate any provision listed in Subchapter B.
128130 (15) "Open source artificial intelligence system"
129131 means an artificial intelligence system that:
130132 (A) can be used or modified for any purpose
131133 without securing permission from the owner or creator of such an
132134 artificial intelligence system;
133135 (B) can be shared for any use with or without
134136 modifications; and
135137 (C) includes information about the data used to
136138 train such system that is sufficiently detailed such that a person
137139 skilled in artificial intelligence could create a substantially
138140 equivalent system when the following are made available freely or
139141 through a non-restrictive license:
140142 (i) the same or similar data;
141143 (ii) the source code used to train and run
142144 such system; and
143145 (iii) the model weights and parameters of
144146 such system.
145147 (16) "Operational technology" means hardware and
146148 software that detects or causes a change through the direct
147149 monitoring or control of physical devices, processes, and events in
148150 the enterprise.
149151 (17) "Personal data" has the meaning assigned to it by
150152 Section 541.001, Business and Commerce Code.
151153 (18) "Risk" means the composite measure of an event's
152154 probability of occurring and the magnitude or degree of the
153155 consequences of the corresponding event.
154156 (19) "Sensitive personal attribute" means race,
155157 political opinions, religious or philosophical beliefs, ethnic
156158 orientation, mental health diagnosis, or sex. The term does not
157159 include conduct that would be classified as an offense under
158160 Chapter 21, Penal Code.
159161 (20) "Social media platform" has the meaning assigned
160162 by Section 120.001, Business and Commerce Code.
161163 (21) "Substantial factor" means a factor that is:
162164 (A) considered when making a consequential
163165 decision;
164166 (B) likely to alter the outcome of a
165167 consequential decision; and
166168 (C) weighed more heavily than any other factor
167169 contributing to the consequential decision.
168170 (22) "Intentional and substantial modification" or
169171 "Substantial modification" means a deliberate change made to an
170172 artificial intelligence system that reasonably increases the risk
171173 of algorithmic discrimination.
172174 Sec. 551.002. APPLICABILITY OF CHAPTER. This chapter
173175 applies only to a person that is not a small business as defined by
174176 the United States Small Business Administration, and:
175177 (1) conducts business, promotes, or advertises in this
176178 state or produces a product or service consumed by residents of this
177179 state; or
178180 (2) engages in the development, distribution, or
179181 deployment of a high-risk artificial intelligence system in this
180182 state.
181183 Sec. 551.003. DEVELOPER DUTIES. (a) A developer of a
182184 high-risk artificial intelligence system shall use reasonable care
183185 to protect consumers from any known or reasonably foreseeable risks
184186 of algorithmic discrimination arising from the intended and
185187 contracted uses of the high-risk artificial intelligence system.
186188 (b) Prior to providing a high-risk artificial intelligence
187189 system to a deployer, a developer shall provide to the deployer, in
188190 writing, a High-Risk Report that consists of:
189191 (1) a statement describing how the high-risk
190192 artificial intelligence system should be used or not be used;
191193 (2) any known limitations of the system that could
192194 lead to algorithmic discrimination, the metrics used to measure the
193195 system's performance, which shall include, at a minimum, metrics
194196 related to accuracy, explainability, transparency, reliability,
195197 and security set forth in the most recent version of the "Artificial
196198 Intelligence Risk Management Framework: Generative Artificial
197199 Intelligence Profile" published by the National Institute of
198200 Standards and Technology, and how the system performs under those
199201 metrics in its intended use contexts;
200202 (3) any known or reasonably foreseeable risks of
201203 algorithmic discrimination, arising from its intended or likely
202204 use;
203205 (4) a high-level summary of the type of data used to
204206 program or train the high-risk artificial intelligence system;
205207 (5) the data governance measures used to cover the
206208 training datasets and their collection, and the measures used to
207209 examine the suitability of data sources and prevent unlawful
208210 discriminatory biases; and
209211 (6) appropriate principles, processes, and personnel
210212 for the deployers' risk management policy.
211213 (c) If a high-risk artificial intelligence system is
212214 intentionally or substantially modified after a developer provides
213215 it to a deployer, a developer shall make necessary information in
214216 subsection (b) available to deployers within 30 days of the
215217 modification.
216218 (d) If a developer believes or has reason to believe that
217219 it deployed a high-risk artificial intelligence system that does
218220 not comply with a requirement of this chapter, the developer shall
219221 immediately take the necessary corrective actions to bring that
220222 system into compliance, including by withdrawing it, disabling it,
221223 and recalling it, as appropriate. Where applicable, the developer
222224 shall inform the distributors or deployers of the high-risk
223225 artificial intelligence system concerned.
224226 (e) Where the high-risk artificial intelligence system
225227 presents risks of algorithmic discrimination, unlawful use or
226228 disclosure of personal data, or deceptive manipulation or coercion
227229 of human behavior and the developer knows or should reasonably know
228230 of that risk, it shall immediately investigate the causes, in
229231 collaboration with the deployer, where applicable, and inform the
230232 attorney general in writing of the nature of the non-compliance and
231233 of any relevant corrective action taken.
232234 (f) Developers shall keep detailed records of any
233235 generative artificial intelligence training data used to develop a
234236 generative artificial intelligence system or service, consistent
235237 with the suggested actions under GV-1.2-007 of the "Artificial
236238 Intelligence Risk Management Framework: Generative Artificial
237239 Intelligence Profile" by the National Institute of Standards and
238240 Technology, or any subsequent versions thereof.
239241 Sec. 551.004. DISTRIBUTOR DUTIES. A distributor of a
240242 high-risk artificial intelligence system shall use reasonable care
241243 to protect consumers from any known or reasonably foreseeable risks
242244 of algorithmic discrimination. If a distributor of a high-risk
243245 artificial intelligence system knows or has reason to know that a
244246 high-risk artificial intelligence system is not in compliance with
245247 any requirement in this chapter, it shall immediately withdraw,
246248 disable, or recall, as appropriate, the high-risk artificial
247249 intelligence system from the market until the system has been
248250 brought into compliance with the requirements of this chapter. The
249251 distributor shall inform the developers of the high-risk artificial
250252 intelligence system concerned and, where applicable, the
251253 deployers.
252254 Sec. 551.005. DEPLOYER DUTIES. A deployer of a high-risk
253255 artificial intelligence system shall use reasonable care to protect
254256 consumers from any known or reasonably foreseeable risks of
255257 algorithmic discrimination. If a deployer of a high-risk
256258 artificial intelligence system knows or has reason to know that a
257259 high-risk artificial intelligence system is not in compliance with
258260 any requirement in this chapter, it shall immediately suspend the
259261 use of the high-risk artificial intelligence system
260262 until the system has been brought into compliance with the
261263 requirements of this chapter. The deployer shall inform the
262264 developers of the high-risk artificial intelligence system
263265 concerned and, where applicable, the distributors.
264266 Sec. 551.006. IMPACT ASSESSMENTS. (a) A deployer that
265267 deploys a high-risk artificial intelligence system shall complete
266268 an impact assessment for the high-risk artificial intelligence
267269 system. A deployer, or a third party contracted by the deployer for
268270 such purposes, shall complete an impact assessment annually and
269271 within ninety days after any intentional and substantial
270272 modification to the high-risk artificial intelligence system is
271273 made available. An impact assessment must include, at a minimum,
272274 and to the extent reasonably known by or available to the deployer:
273275 (1) a statement by the deployer disclosing the
274276 purpose, intended use cases, and deployment context of, and
275277 benefits afforded by, the high-risk artificial intelligence
276278 system;
277279 (2) an analysis of whether the deployment of the
278280 high-risk artificial intelligence system poses any known or
279281 reasonably foreseeable risks of algorithmic discrimination and, if
280282 so, the nature of the algorithmic discrimination and the steps that
281283 have been taken to mitigate the risks;
282284 (3) a description of the categories of data the
283285 high-risk artificial intelligence system processes as inputs and
284286 the outputs the high-risk artificial intelligence system produces;
285287 (4) if the deployer used data to customize the
286288 high-risk artificial intelligence system, an overview of the
287289 categories of data the deployer used to customize the high-risk
288290 artificial intelligence system;
289291 (5) any metrics used to evaluate the performance and
290292 known limitations of the high-risk artificial intelligence system;
291293 (6) a description of any transparency measures taken
292294 concerning the high-risk artificial intelligence system, including
293295 any measures taken to disclose to a consumer that the high-risk
294296 artificial intelligence system will be used;
295297 (7) a description of the post-deployment monitoring
296298 and user safeguards provided concerning the high-risk artificial
297299 intelligence system, including the oversight, use, and learning
298300 process established by the deployer to address issues arising from
299301 the deployment of the high-risk artificial intelligence system; and
300302 (8) a description of cybersecurity measures and threat
301303 modeling conducted on the system.
302304 (b) Following an intentional and substantial modification
303305 to a high-risk artificial intelligence system, a deployer must
304306 disclose the extent to which the high-risk artificial intelligence
305307 system was used in a manner that was consistent with, or varied
306308 from, the developer's intended uses of the high-risk artificial
307309 intelligence system.
308310 (c) A single impact assessment may address a comparable set
309311 of high-risk artificial intelligence systems deployed by a
310312 deployer.
311313 (d) A deployer shall maintain the most recently completed
312314 impact assessment for a high-risk artificial intelligence system,
313315 all records concerning each impact assessment, and all prior impact
314316 assessments, if any, for at least three years following the final
315317 deployment of the high-risk artificial intelligence system.
316318 (e) If a deployer, or a third party contracted by the
317319 deployer, completes an impact assessment for the purpose of
318320 complying with another applicable law or regulation, such impact
319321 assessment shall be deemed to satisfy the requirements established
320322 in this subsection if such impact assessment is reasonably similar
321323 in scope and effect to the impact assessment that would otherwise be
322324 completed pursuant to this subsection.
323325 (f) A deployer may redact any trade secrets as defined by
324326 Section 541.001(33), Business & Commerce Code or information
325327 protected from disclosure by state or federal law.
326328 (g) Except as provided in subsection (e) of this section, a
327329 developer that makes a high-risk artificial intelligence system
328330 available to a deployer shall make available to the deployer the
329331 documentation and information necessary for a deployer to complete
330332 an impact assessment pursuant to this section.
331333 (h) A developer that also serves as a deployer for a
332334 high-risk artificial intelligence system is not required to
333335 generate and store an impact assessment unless the high-risk
334336 artificial intelligence system is provided to an unaffiliated
335337 deployer.
336338 Sec. 551.007. DISCLOSURE OF A HIGH-RISK ARTIFICIAL
337339 INTELLIGENCE SYSTEM TO CONSUMERS. (a) A deployer or developer that
338340 deploys, offers, sells, leases, licenses, gives, or otherwise makes
339341 available a high-risk artificial intelligence system that is
340342 intended to interact with consumers shall disclose to each
341343 consumer, before or at the time of interaction:
342344 (1) that the consumer is interacting with an
343345 artificial intelligence system;
344346 (2) the purpose of the system;
345347 (3) that the system may or will make a consequential
346348 decision affecting the consumer;
347349 (4) the nature of any consequential decision in which
348350 the system is or may be a substantial factor;
349351 (5) the factors to be used in making any consequential
350352 decisions;
351353 (6) contact information of the deployer;
352354 (7) a description of:
353355 (A) any human components of the system;
354356 (B) any automated components of the system; and
355357 (C) how human and automated components are used
356358 to inform a consequential decision; and
357359 (8) a declaration of the consumer's rights under
358360 Section 551.108.
359361 (b) Disclosure is required under subsection (a) of this
360362 section regardless of whether it would be obvious to a reasonable
361363 person that the person is interacting with an artificial
362364 intelligence system.
363365 (c) All disclosures under subsection (a) shall be clear and
364366 conspicuous and written in plain language, and avoid the use of a
365367 dark pattern as defined by Section 541.001, Business & Commerce Code.
366368 (d) All disclosures under subsection (a) may be linked to a
367369 separate webpage of the developer or deployer.
368370 (e) Any requirement in this section that may conflict with
369371 state or federal law may be exempt.
370372 Sec. 551.008. RISK IDENTIFICATION AND MANAGEMENT POLICY.
371373 (a) A developer or deployer of a high-risk artificial intelligence
372374 system shall, prior to deployment, assess potential risks of
373375 algorithmic discrimination and implement a risk management policy
374376 to govern the development or deployment of the high-risk artificial
375377 intelligence system. The risk management policy shall:
376378 (1) specify and incorporate the principles and
377379 processes that the developer or deployer uses to identify,
378380 document, and mitigate, in the development or deployment of a
379381 high-risk artificial intelligence system:
380382 (A) known or reasonably foreseeable risks of
381383 algorithmic discrimination; and
382384 (B) prohibited uses and unacceptable risks under
383385 Subchapter B; and
384386 (2) be reasonable in size, scope, and breadth,
385387 considering:
386388 (A) guidance and standards set forth in the most
387389 recent version of the "Artificial Intelligence Risk Management
388390 Framework: Generative Artificial Intelligence Profile" published
389391 by the National Institute of Standards and Technology;
390392 (B) any existing risk management guidance,
391393 standards or framework applicable to artificial intelligence
392394 systems designated by the Banking Commissioner or Insurance
393395 Commissioner, if the developer or deployer is regulated by the
394396 Department of Banking or Department of Insurance;
395397 (C) the size and complexity of the developer or
396398 deployer;
397399 (D) the nature, scope, and intended use of the
398400 high-risk artificial intelligence systems developed or deployed;
399401 and
400402 (E) the sensitivity and volume of personal data
401403 processed in connection with the high-risk artificial intelligence
402404 systems.
403405 (b) A risk management policy implemented pursuant to this
404406 section may apply to more than one high-risk artificial
405407 intelligence system developed or deployed, so long as the developer
406408 or deployer complies with all of the foregoing requirements
407409 considerations in adopting and implementing the risk management
408410 policy with respect to each high-risk artificial intelligence
409411 system covered by the policy.
410412 (c) A developer or deployer may redact or omit any trade
411413 secrets as defined by Section 541.001(33), Business & Commerce Code,
412414 or information protected from disclosure by state or federal law.
413415 Sec. 551.009. RELATIONSHIPS BETWEEN ARTIFICIAL
414416 INTELLIGENCE PARTIES. Any distributor or deployer shall be
415417 considered to be a developer of a high-risk artificial intelligence
416418 system for the purposes of this chapter and shall be subject to the
417419 obligations and duties of a developer under this chapter in any of
418420 the following circumstances:
419421 (1) they put their name or trademark on a high-risk
420422 artificial intelligence system already placed in the market or put
421423 into service;
422424 (2) they intentionally and substantially modify a
423425 high-risk artificial intelligence system that has already been
424426 placed in the market or has already been put into service in such a
425427 way that it remains a high-risk artificial intelligence system
426428 under this chapter; or
427429 (3) they modify the intended purpose of an artificial
428430 intelligence system which has not previously been classified as
429431 high-risk and has already been placed in the market or put into
430432 service in such a way that the artificial intelligence system
431433 concerned becomes a high-risk artificial intelligence system in
432434 accordance with this chapter.
434436 Sec. 551.010. DIGITAL SERVICE PROVIDER AND SOCIAL MEDIA
435437 PLATFORM DUTIES REGARDING ARTIFICIAL INTELLIGENCE SYSTEMS. A
436438 digital service provider as defined by Section 509.001(2), Business &
437439 Commerce Code, or a social media platform as defined by Section
438440 120.001(1), Business & Commerce Code, shall require advertisers on
439441 the service or platform to agree to terms preventing the deployment
440442 of a high-risk artificial intelligence system on the service or
441443 platform that could expose the users of the service or platform to
442444 algorithmic discrimination or prohibited uses under Subchapter B.
443445 Sec. 551.011. REPORTING REQUIREMENTS. (a) A deployer must
444446 notify, in writing, the council, the attorney general, or the
445447 director of the appropriate state agency that regulates the
446448 deployer's industry, and affected consumers as soon as practicable
447449 after the date on which the deployer discovers or is made aware that
448450 a deployed high-risk artificial intelligence system has caused
449451 algorithmic discrimination of an individual or group of
450452 individuals.
451453 (b) If a developer discovers or is made aware that a
452454 deployed high-risk artificial intelligence system is using inputs
453455 or providing outputs that constitute a violation of Subchapter B,
454456 the deployer must cease operation of the offending system as soon as
455457 technically feasible and provide notice to the council and the
456458 attorney general as soon as practicable and not later than the 10th
457459 day after the date on which the developer discovers or is made aware
458460 of the unacceptable risk.
459461 Sec. 551.012. SANDBOX PROGRAM EXCEPTION. (a) Excluding
460462 violations of Subchapter B, this chapter does not apply to the
461463 development of an artificial intelligence system that is used
462464 exclusively for research, training, testing, or other
463465 pre-deployment activities performed by active participants of the
464466 sandbox program in compliance with Chapter 552.
465467 SUBCHAPTER B. PROHIBITED USES AND UNACCEPTABLE RISK
466468 Sec. 551.051. MANIPULATION OF HUMAN BEHAVIOR TO CIRCUMVENT
467469 INFORMED DECISION-MAKING. An artificial intelligence system shall
468470 not be developed or deployed that uses subliminal techniques beyond
469471 a person's consciousness, or purposefully manipulative or
470472 deceptive techniques, with the objective or the effect of
471473 materially distorting the behavior of a person or a group of persons
472474 by appreciably impairing their ability to make an informed
473475 decision, thereby causing a person to make a decision that the
474476 person would not have otherwise made, in a manner that causes or is
475477 likely to cause significant harm to that person or another person or
476478 group of persons.
477479 Sec. 551.052. SOCIAL SCORING. An artificial intelligence
478480 system shall not be developed or deployed for the evaluation or
479481 classification of natural persons or groups of natural persons
480482 based on their social behavior or known, inferred, or predicted
481483 personal characteristics with the intent to determine a social
482484 score or similar categorical estimation or valuation of a person or
483485 groups of persons.
484486 Sec. 551.053. CAPTURE OF BIOMETRIC IDENTIFIERS USING
485487 ARTIFICIAL INTELLIGENCE. An artificial intelligence system
486488 developed with biometric identifiers of individuals and the
487489 targeted or untargeted gathering of images or other media from the
488490 internet or any other publicly available source shall not be
489491 deployed for the purpose of uniquely identifying a specific
490492 individual. An individual is not considered to be informed nor to
491493 have provided consent for such purpose pursuant to Section 503.001,
492494 Business and Commerce Code, based solely upon the existence on the
493495 internet, or other publicly available source, of an image or other
494496 media containing one or more biometric identifiers.
495497 Sec. 551.054. CATEGORIZATION BASED ON SENSITIVE
496498 ATTRIBUTES. An artificial intelligence system shall not be
497499 developed or deployed with the specific purpose of inferring or
498500 interpreting sensitive personal attributes of a person or group of
499501 persons using biometric identifiers, except for the labeling or
500502 filtering of lawfully acquired biometric identifier data.
501503 Sec. 551.055. UTILIZATION OF PERSONAL ATTRIBUTES FOR HARM.
502504 An artificial intelligence system shall not utilize
503505 characteristics of a person or a specific group of persons based on
504506 their race, color, disability, religion, sex, national origin, age,
505507 or a specific social or economic situation, with the objective, or
506508 the effect, of materially distorting the behavior of that person or
507509 a person belonging to that group in a manner that causes or is
508510 reasonably likely to cause that person or another person harm.
509511 Sec. 551.056. CERTAIN SEXUALLY EXPLICIT VIDEOS, IMAGES, AND
510512 CHILD PORNOGRAPHY. An artificial intelligence system shall not be
511513 developed or deployed that produces, assists, or aids in producing,
512514 or is capable of producing unlawful visual material in violation of
513515 Section 43.26, Penal Code or an unlawful deep fake video or image in
514516 violation of Section 21.165, Penal Code.
515517 SUBCHAPTER C. ENFORCEMENT AND CONSUMER PROTECTIONS
516518 Sec. 551.101. CONSTRUCTION AND APPLICATION. (a) This
517519 chapter shall be broadly construed and applied to promote its
518520 underlying purposes, which are:
519521 (1) to facilitate and advance the responsible
520522 development and use of artificial intelligence systems;
521523 (2) to protect individuals and groups of individuals
522524 from known, and unknown but reasonably foreseeable, risks,
523525 including unlawful algorithmic discrimination;
524526 (3) to provide transparency regarding those risks in
525527 the development, deployment, or use of artificial intelligence
526528 systems; and
527529 (4) to provide reasonable notice regarding the use or
528530 considered use of artificial intelligence systems by state
529531 agencies.
530532 (b) This chapter does not apply to the developer of an open
531533 source artificial intelligence system, provided that:
532534 (1) the system is not deployed as a high-risk
533535 artificial intelligence system and the developer has taken
534536 reasonable steps to ensure that the system cannot be used as a
535537 high-risk artificial intelligence system without substantial
536538 modifications; and
537539 (2) the weights and technical architecture of the
538540 system are made publicly available.
539541 Sec. 551.102. ENFORCEMENT AUTHORITY. The attorney general
540542 has authority to enforce this chapter. Excluding violations of
541543 Subchapter B, researching, training, testing, or the conducting of
542544 other pre-deployment activities by active participants of the
543545 sandbox program, in compliance with Chapter 552, does not subject a
544546 developer or deployer to penalties or actions.
545547 Sec. 551.103. INTERNET WEBSITE AND COMPLAINT MECHANISM.
546548 The attorney general shall post on the attorney general's Internet
547549 website:
548550 (1) information relating to:
549551 (A) the responsibilities of a developer,
550552 distributor, and deployer under Subchapter A; and
551553 (B) an online mechanism through which a consumer
552554 may submit a complaint under this chapter to the attorney general.
553555 Sec. 551.104. INVESTIGATIVE AUTHORITY. (a) If the
554556 attorney general has reasonable cause to believe that a person has
555557 engaged in or is engaging in a violation of this chapter, the
556558 attorney general may issue a civil investigative demand. The
557559 attorney general shall issue such demands in accordance with and
558560 under the procedures established under Section 15.10.
559561 (b) The attorney general may request, pursuant to a civil
560562 investigative demand issued under Subsection (a), that a developer
561563 or deployer of a high-risk artificial intelligence system disclose
562564 their risk management policy and impact assessments required under
563565 Subchapter A. The attorney general may evaluate the risk
564566 management policy and impact assessments for compliance with the
565567 requirements set forth in Subchapter A.
566568 (c) The attorney general may not institute an action for a
567569 civil penalty against a developer or deployer for artificial
568570 intelligence systems that remain isolated from customer
569571 interaction in a pre-deployment environment.
570572 Sec. 551.105. NOTICE OF VIOLATION OF CHAPTER; OPPORTUNITY
571573 TO CURE. Before bringing an action under Section 551.106, the
572574 attorney general shall notify a developer, distributor, or deployer
573575 in writing, not later than the 30th day before bringing the action,
574576 identifying the specific provisions of this chapter the attorney
575577 general alleges have been or are being violated. The attorney
576578 general may not bring an action against the developer or deployer
577579 if:
578580 (1) within the 30-day period, the developer or
579581 deployer cures the identified violation; and
580582 (2) the developer or deployer provides the attorney
581583 general a written statement that the developer or deployer:
582584 (A) cured the alleged violation;
583585 (B) notified the consumer, if technically
584586 feasible, and the council that the developer or deployer's
585587 violation was addressed, if the consumer's contact information has
586588 been made available to the developer or deployer and the attorney
587589 general;
588590 (C) provided supportive documentation to show
589591 how the violation was cured; and
590592 (D) made changes to internal policies, if
591593 necessary, to reasonably ensure that no such further violations are
592594 likely to occur.
593595 Sec. 551.106. CIVIL PENALTY; INJUNCTION. (a) The attorney
594596 general may bring an action in the name of this state to restrain or
595597 enjoin the person from violating this chapter and seek injunctive
596598 relief.
597599 (b) The attorney general may recover reasonable attorney's
598600 fees and other reasonable expenses incurred in investigating and
599601 bringing an action under this section.
600602 (c) The attorney general may assess and collect an
601603 administrative fine against a developer or deployer who fails to
602604 timely cure a violation or who breaches a written statement
603605 provided to the attorney general, other than those for a prohibited
604606 use, of not less than $50,000 and not more than $100,000 per uncured
605607 violation.
606608 (d) The attorney general may assess and collect an
607609 administrative fine against a developer or deployer who fails to
608610 timely cure a violation of a prohibited use, or whose violation is
609611 determined to be uncurable, of not less than $80,000 and not more
610612 than $200,000 per violation.
611613 (e) A developer or deployer found in violation of this chapter
612614 who continues to operate in violation of its provisions shall
613615 be assessed an administrative fine of not less than $2,000 and not
614616 more than $40,000 per day.
615617 (f) There is a rebuttable presumption that a developer,
616618 distributor, or deployer used reasonable care as required under
617619 this chapter if the developer, distributor, or deployer complied
618620 with their duties under Subchapter A.
619621 Sec. 551.107. ENFORCEMENT ACTIONS BY STATE AGENCIES. A
620622 state agency may sanction an individual licensed, registered, or
621623 certified by that agency for violations of Subchapter B, including:
622624 (1) the suspension, probation, or revocation of a
623625 license, registration, certificate, or other form of permission to
624626 engage in an activity; and
625627 (2) monetary penalties up to $100,000.
626628 Sec. 551.108. CONSUMER RIGHTS AND REMEDIES. A consumer may
627629 appeal a consequential decision made by a high-risk artificial
628630 intelligence system which has an adverse impact on their health,
629631 safety, or fundamental rights, and shall have the right to obtain
630632 from the deployer clear and meaningful explanations of the role of
631633 the high-risk artificial intelligence system in the
632634 decision-making procedure and the main elements of the decision
633635 taken.
634636 SUBCHAPTER D. CONSTRUCTION OF CHAPTER; LOCAL PREEMPTION
635637 Sec. 551.151. CONSTRUCTION OF CHAPTER. This chapter may
636638 not be construed as imposing a requirement on a developer, a
637639 deployer, or other person that adversely affects the rights or
638640 freedoms of any person, including the right of free speech.
639641 Sec. 551.152. LOCAL PREEMPTION. This chapter supersedes
640642 and preempts any ordinance, resolution, rule, or other regulation
641643 adopted by a political subdivision regarding the use of high-risk
642644 artificial intelligence systems.
643645 CHAPTER 552. ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM
644646 SUBCHAPTER A. GENERAL PROVISIONS
645647 Sec. 552.001. DEFINITIONS. In this chapter:
646648 (1) "Applicable agency" means a state agency
647649 responsible for regulating a specific sector impacted by an
648650 artificial intelligence system.
649651 (2) "Consumer" means a person who engages in
650652 transactions involving an artificial intelligence system or is
651653 directly affected by the use of such a system.
652654 (3) "Council" means the Artificial Intelligence
653655 Council established by Chapter 553.
654656 (4) "Department" means the Texas Department of
655657 Information Resources.
656658 (5) "Program participant" means a person or business
657659 entity approved to participate in the sandbox program.
658660 (6) "Sandbox program" means the regulatory framework
659661 established under this chapter that allows temporary testing of
660662 artificial intelligence systems in a controlled, limited manner
661663 without full regulatory compliance.
662664 SUBCHAPTER B. SANDBOX PROGRAM FRAMEWORK
663665 Sec. 552.051. ESTABLISHMENT OF SANDBOX PROGRAM. (a) The
664666 department, in coordination with the council, shall administer the
665667 Artificial Intelligence Regulatory Sandbox Program to facilitate
666668 the development, testing, and deployment of innovative artificial
667669 intelligence systems in Texas.
668670 (b) The sandbox program is designed to:
669671 (1) promote the safe and innovative use of artificial
670672 intelligence across various sectors including healthcare, finance,
671673 education, and public services;
672674 (2) encourage the responsible deployment of
673675 artificial intelligence systems while balancing the need for
674676 consumer protection, privacy, and public safety; and
675677 (3) provide clear guidelines for artificial
676678 intelligence developers to test systems while temporarily exempt
677679 from certain regulatory requirements.
678680 Sec. 552.052. APPLICATION PROCESS. (a) A person or
679681 business entity seeking to participate in the sandbox program must
680682 submit an application to the council.
681683 (b) The application must include:
682684 (1) a detailed description of the artificial
683685 intelligence system and its intended use;
684686 (2) a risk assessment that addresses potential impacts
685687 on consumers, privacy, and public safety;
686688 (3) a plan for mitigating any adverse consequences
687689 during the testing phase; and
688690 (4) proof of compliance with federal artificial
689691 intelligence laws and regulations, where applicable.
690692 Sec. 552.053. DURATION AND SCOPE OF PARTICIPATION. A
691693 participant may test an artificial intelligence system under the
692694 sandbox program for a period of up to 36 months, unless extended by
693695 the department for good cause.
694696 SUBCHAPTER C. OVERSIGHT AND COMPLIANCE
695697 Sec. 552.101. AGENCY COORDINATION. (a) The department
696698 shall coordinate with all relevant state regulatory agencies to
697699 oversee the operations of the sandbox participants.
698700 (b) A relevant agency may recommend to the department that a
699701 participant's sandbox privileges be revoked if the artificial
700702 intelligence system:
701703 (1) poses undue risk to public safety or welfare; or
702704 (2) violates any federal or state laws that the
703705 sandbox program cannot override.
704706 Sec. 552.102. REPORTING REQUIREMENTS. (a) Each sandbox
705707 participant must submit quarterly reports to the department, which
706708 shall include:
707709 (1) system performance metrics;
708710 (2) updates on how the system mitigates any risks
709711 associated with its operation; and
710712 (3) feedback from consumers and affected stakeholders
711713 that are using a product that has been deployed under this chapter.
712714 (b) The department must submit an annual report to the
713715 legislature detailing:
714716 (1) the number of participants in the sandbox program;
715717 (2) the overall performance and impact of artificial
716718 intelligence systems tested within the program; and
717719 (3) recommendations for future legislative or
718720 regulatory reforms.
719721 CHAPTER 553. TEXAS ARTIFICIAL INTELLIGENCE COUNCIL
720722 SUBCHAPTER A. CREATION AND ORGANIZATION OF COUNCIL
721723 Sec. 553.001. CREATION OF COUNCIL. (a) The Artificial
722724 Intelligence Council is administratively attached to the office of
723725 the governor, and the office of the governor shall provide
724726 administrative support to the council as provided by this section.
725727 (b) The office of the governor and the council shall enter
726728 into a memorandum of understanding detailing:
727729 (1) the administrative support the council requires
728730 from the office of the governor to fulfill the purposes of this
729731 chapter;
730732 (2) the reimbursement of administrative expenses to
731733 the office of the governor; and
732734 (3) any other provisions available by law to ensure
733735 the efficient operation of the council as attached to the office of
734736 the governor.
735737 (c) The purpose of the council is to:
736738 (1) ensure artificial intelligence systems are
737739 ethical and in the public's best interest and do not harm public
738740 safety or undermine individual freedoms by finding gaps in the
739741 Penal Code and Chapter 82, Civil Practice and Remedies Code, and
740742 making recommendations to the Legislature;
741743 (2) identify existing laws and regulations that impede
742744 innovation in artificial intelligence development and recommend
743745 appropriate reforms;
744746 (3) analyze opportunities to improve the efficiency
745747 and effectiveness of state government operations through the use of
746748 artificial intelligence systems;
747749 (4) investigate and evaluate potential instances of
748750 regulatory capture, including undue influence by technology
749751 companies or disproportionate burdens on smaller innovators;
750752 (5) investigate and evaluate the influence of
751753 technology companies on other companies and determine the existence
752754 or use of tools or processes designed to censor competitors or
753755 users; and
754756 (6) offer guidance and recommendations to state
755757 agencies, including advisory opinions on the ethical and legal use
756758 of artificial intelligence.
757759 Sec. 553.002. COUNCIL MEMBERSHIP. (a) The council is
758760 composed of 10 members as follows:
759761 (1) four members of the public appointed by the
760762 governor;
761763 (2) two members of the public appointed by the
762764 lieutenant governor;
763765 (3) two members of the public appointed by the speaker
764766 of the house of representatives;
765767 (4) one senator appointed by the lieutenant governor
766768 as a nonvoting member; and
767769 (5) one member of the house of representatives
768770 appointed by the speaker of the house of representatives as a
769771 nonvoting member.
770772 (b) Voting members of the council serve staggered four-year
771773 terms, with the terms of four members expiring every two years.
772774 (c) The governor shall appoint a chair from among the
773775 members, and the council shall elect a vice chair from its
774776 membership.
775777 (d) The council may establish an advisory board composed of
776778 individuals from the public who possess expertise directly related
777779 to the council's functions, including technical, ethical,
778780 regulatory, and other relevant areas.
779781 Sec. 553.003. QUALIFICATIONS. (a) Members of the council
780782 must be Texas residents and have knowledge or expertise in one or
781783 more of the following areas:
782784 (1) artificial intelligence technologies;
783785 (2) data privacy and security;
784786 (3) ethics in technology or law;
785787 (4) public policy and regulation; or
786788 (5) risk management or safety related to artificial
787789 intelligence systems.
788790 (b) Members must not hold an office or profit under the
789791 state or federal government at the time of appointment.
790792 Sec. 553.004. STAFF AND ADMINISTRATION. The council may
791793 employ an executive director and other personnel as necessary to
792794 perform its duties.
793795 SUBCHAPTER B. POWERS AND DUTIES OF THE COUNCIL
794796 Sec. 553.101. ISSUANCE OF ADVISORY OPINIONS. (a) A state
795797 agency may request a written advisory opinion from the council
796798 regarding the use of artificial intelligence systems in the state.
797799 (b) The council may issue advisory opinions on state use of
798800 artificial intelligence systems regarding:
799801 (1) the compliance of artificial intelligence systems
800802 with Texas law;
801803 (2) the ethical implications of artificial
802804 intelligence deployments in the state;
803805 (3) data privacy and security concerns related to
804806 artificial intelligence systems; or
805807 (4) potential liability or legal risks associated with
806808 the use of artificial intelligence systems.
807809 Sec. 553.102. RULEMAKING AUTHORITY. (a) The council may
808810 adopt rules necessary to administer its duties under this chapter,
809811 including:
810812 (1) procedures for requesting advisory opinions;
811813 (2) standards for ethical artificial intelligence
812814 development and deployment; and
813815 (3) guidelines for evaluating the safety, privacy, and
814816 fairness of artificial intelligence systems.
815817 (b) The council's rules shall align with state laws on
816818 artificial intelligence, technology, data security, and consumer
817819 protection.
818820 Sec. 553.103. TRAINING AND EDUCATIONAL OUTREACH. The
819821 council shall conduct training programs for state agencies and
820822 local governments on the ethical use of artificial intelligence
821823 systems.
822824 SECTION 3. Section 503.001, Business & Commerce Code, is
823825 amended by adding Subsection (c-3) to read as follows:
824826 (c-3) This section does not apply to the training,
825827 processing, or storage of biometric identifiers involved in machine
826828 learning or artificial intelligence systems, unless performed for
827829 the purpose of uniquely identifying a specific individual. If a
828830 biometric identifier captured for the purpose of training an
829831 artificial intelligence system is subsequently used for a
830832 commercial purpose, the person possessing the biometric identifier
831833 is subject to this section's provisions for the possession and
832834 destruction of a biometric identifier and the associated penalties.
833835 SECTION 4. Sections 541.051(b), 541.101(a), 541.102(a),
834836 and 541.104(a), Business & Commerce Code, are amended to read
835837 as follows:
836838 Sec. 541.051. CONSUMER'S PERSONAL DATA RIGHTS; REQUEST TO
837839 EXERCISE RIGHTS. (a) A consumer is entitled to exercise the
838840 consumer rights authorized by this section at any time by
839841 submitting a request to a controller specifying the consumer rights
840842 the consumer wishes to exercise. With respect to the processing of
841843 personal data belonging to a known child, a parent or legal guardian
842844 of the child may exercise the consumer rights on behalf of the
843845 child.
844846 (b) A controller shall comply with an authenticated
845847 consumer request to exercise the right to:
846848 (1) confirm whether a controller is processing the
847849 consumer's personal data and to access the personal data;
848850 (2) correct inaccuracies in the consumer's personal
849851 data, taking into account the nature of the personal data and the
850852 purposes of the processing of the consumer's personal data;
851853 (3) delete personal data provided by or obtained about
852854 the consumer;
853855 (4) if the data is available in a digital format,
854856 obtain a copy of the consumer's personal data that the consumer
855857 previously provided to the controller in a portable and, to the
856858 extent technically feasible, readily usable format that allows the
857859 consumer to transmit the data to another controller without
858860 hindrance; [or]
859861 (5) know if the consumer's personal data is or will be
860862 used in any artificial intelligence system and for what purposes;
861863 or
862864 ([5]6) opt out of the processing of the personal data
863865 for purposes of:
864866 (A) targeted advertising;
865867 (B) the sale of personal data; [or]
866868 (C) the sale of personal data for use in
867869 artificial intelligence systems prior to being collected; or
868870 ([C]D) profiling in furtherance of a decision
869871 that produces a legal or similarly significant effect concerning
870872 the consumer.
871873 Sec. 541.101. CONTROLLER DUTIES; TRANSPARENCY. (a) A
872874 controller:
873875 (1) shall limit the collection of personal data to
874876 what is adequate, relevant, and reasonably necessary in relation to
875877 the purposes for which that personal data is processed, as
876878 disclosed to the consumer; [and]
877879 (2) for purposes of protecting the confidentiality,
878880 integrity, and accessibility of personal data, shall establish,
879881 implement, and maintain reasonable administrative, technical, and
880882 physical data security practices that are appropriate to the volume
881883 and nature of the personal data at issue[.]; and
882884 (3) for purposes of protecting against the unauthorized
883885 access, disclosure, alteration, or destruction of data collected,
884886 stored, and processed by artificial intelligence systems, shall
885887 establish, implement, and maintain reasonable administrative,
886888 technical, and physical data security practices that are
887889 appropriate to the volume and nature of the data collected, stored,
888890 and processed by artificial intelligence systems.
889891 Sec. 541.102. PRIVACY NOTICE. (a) A controller shall
890892 provide consumers with a reasonably accessible and clear privacy
891893 notice that includes:
892894 (1) the categories of personal data processed by the
893895 controller, including, if applicable, any sensitive data processed
894896 by the controller;
895897 (2) the purpose for processing personal data;
896898 (3) how consumers may exercise their consumer rights
897899 under Subchapter B, including the process by which a consumer may
898900 appeal a controller's decision with regard to the consumer's
899901 request;
900902 (4) if applicable, the categories of personal data
901903 that the controller shares with third parties;
902904 (5) if applicable, the categories of third parties
903905 with whom the controller shares personal data; [and]
904906 (6) if applicable, an acknowledgement of the
905907 collection, use, and sharing of personal data for artificial
906908 intelligence purposes; and
907909 ([6]7) a description of the methods required under
908910 Section 541.055 through which consumers can submit requests to
909911 exercise their consumer rights under this chapter.
910912 Sec. 541.104. DUTIES OF PROCESSOR. (a) A processor shall
911913 adhere to the instructions of a controller and shall assist the
912914 controller in meeting or complying with the controller's duties or
913915 requirements under this chapter, including:
914916 (1) assisting the controller in responding to consumer
915917 rights requests submitted under Section 541.051 by using
916918 appropriate technical and organizational measures, as reasonably
917919 practicable, taking into account the nature of processing and the
918920 information available to the processor;
919921 (2) assisting the controller with regard to complying
920922 with the [requirement]requirements relating to the security of
921923 processing personal data, and if applicable, the data collected,
922924 stored, and processed by artificial intelligence systems and to the
923925 notification of a breach of security of the processor's system
924926 under Chapter 521, taking into account the nature of processing and
925927 the information available to the processor; and
926928 (3) providing necessary information to enable the
927929 controller to conduct and document data protection assessments
928930 under Section 541.105.
929931 SECTION 5. Subtitle E, Title 4, Labor Code, is amended by
930932 adding Chapter 319 to read as follows:
931933 CHAPTER 319. TEXAS ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT
932934 GRANT PROGRAM
933935 SUBCHAPTER A. GENERAL PROVISIONS
934936 Sec. 319.001. DEFINITIONS. In this chapter:
935937 (1) "Artificial intelligence industry" means
936938 businesses, research organizations, governmental entities, and
937939 educational institutions engaged in the development, deployment,
938940 or use of artificial intelligence technologies in Texas.
939941 (2) "Commission" means the Texas Workforce
940942 Commission.
941943 (3) "Eligible entity" means Texas-based businesses in
942944 the artificial intelligence industry, public school districts,
943945 community colleges, public technical institutes, and workforce
944946 development organizations.
945947 (4) "Program" means the Texas Artificial Intelligence
946948 Workforce Development Grant Program established under this
947949 chapter.
948950 SUBCHAPTER B. ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT GRANT
949951 PROGRAM
950952 Sec. 319.051. ESTABLISHMENT OF GRANT PROGRAM. (a) The
951953 commission shall establish the Texas Artificial Intelligence
952954 Workforce Development Grant Program to:
953955 (1) support and assist Texas-based artificial
954956 intelligence companies in developing a skilled workforce;
955957 (2) provide grants to local community colleges and
956958 public high schools to implement or expand career and technical
957959 education programs focused on artificial intelligence readiness
958960 and skill development; and
959961 (3) offer opportunities to retrain and reskill workers
960962 through partnerships with the artificial intelligence industry and
961963 workforce development programs.
962964 (b) The program is intended to:
963965 (1) prepare Texas workers and students for employment
964966 in the rapidly growing artificial intelligence industry;
965967 (2) support the creation of postsecondary programs and
966968 certifications relevant to current artificial intelligence
967969 opportunities;
968970 (3) ensure that Texas maintains a competitive edge in
969971 artificial intelligence innovation and workforce development; and
970972 (4) address workforce gaps in artificial
971973 intelligence-related fields, including data science,
972974 cybersecurity, machine learning, robotics, and automation.
973975 (c) The commission shall adopt rules necessary to implement
974976 this subchapter.
975977 Sec. 319.052. FEDERAL FUNDS AND GIFTS, GRANTS, AND
976978 DONATIONS.
977979 In addition to other money appropriated by the legislature,
978980 for the purpose of providing artificial intelligence workforce
979981 opportunities under the program established under this subchapter,
980982 the commission may:
981983 (1) seek and apply for any available federal funds;
982984 and
983985 (2) solicit and accept gifts, grants, and donations
984986 from any other source, public or private, as necessary to ensure
985987 effective implementation of the program.
986988 Sec. 319.053. ELIGIBILITY FOR GRANTS. (a) The following
987989 entities are eligible to apply for grants under this program:
988990 (1) Texas-based businesses engaged in the development
989991 or deployment of artificial intelligence technologies;
990992 (2) public school districts and charter schools
991993 offering or seeking to offer career and technical education
992994 programs in artificial intelligence-related fields or to update
993995 existing curricula to address these fields;
994996 (3) public community colleges and technical
995997 institutes that develop artificial intelligence-related curricula
996998 or training programs or update existing curricula or training
997999 programs to incorporate artificial intelligence training; and
9981000 (4) workforce development organizations in
9991001 partnership with artificial intelligence companies to reskill and
10001002 retrain workers in artificial intelligence competencies.
10011003 (b) To be eligible, the entity must:
10021004 (1) submit an application to the commission in the
10031005 form and manner prescribed by the commission; and
10041006 (2) demonstrate the capacity to develop and implement
10051007 training, educational, or workforce development programs that
10061008 align with the needs of the artificial intelligence industry in
10071009 Texas and lead to knowledge, skills, and work-based experiences
10081010 that are transferable to similar employment opportunities in the
10091011 artificial intelligence industry.
10101012 Sec. 319.054. USE OF GRANTS. (a) Grants awarded under the
10111013 program may be used for:
10121014 (1) developing or expanding workforce training
10131015 programs for artificial intelligence-related skills, including but
10141016 not limited to machine learning, data analysis, software
10151017 development, and robotics;
10161018 (2) creating or enhancing career and technical
10171019 education programs in artificial intelligence for high school
10181020 students, with a focus on preparing them for careers in artificial
10191021 intelligence or related fields;
10201022 (3) providing financial support for instructors,
10211023 equipment, and technology necessary for artificial
10221024 intelligence-related workforce training;
10231025 (4) partnering with local businesses to develop
10241026 internship programs, on-the-job training opportunities, instructor
10251027 externships, and apprenticeships in the artificial intelligence
10261028 industry;
10271029 (5) funding scholarships or stipends for students,
10281030 instructors, and workers participating in artificial intelligence
10291031 training programs, particularly for individuals from underserved
10301032 or underrepresented communities; or
10311033 (6) reskilling and retraining workers displaced by
10321034 technological changes or job automation, with an emphasis on
10331035 artificial intelligence-related job roles.
10341036 (b) The commission shall prioritize funding for:
10351037 (1) initiatives that partner with rural and
10361038 underserved communities to promote artificial intelligence
10371039 education and career pathways;
10381040 (2) programs that lead to credentials of value in
10391041 artificial intelligence or related fields; and
10401042 (3) proposals that include partnerships between the
10411043 artificial intelligence industry, a public or private institution
10421044 of higher education in this state, and workforce development
10431045 organizations.
10441046 SECTION 6. Section 325.011, Government Code, is amended to
10451047 read as follows:
10461048 Sec. 325.011. CRITERIA FOR REVIEW. The commission and its
10471049 staff shall consider the following criteria in determining whether
10481050 a public need exists for the continuation of a state agency or its
10491051 advisory committees or for the performance of the functions of the
10501052 agency or its advisory committees:
10511053 (1) the efficiency and effectiveness with which the
10521054 agency or the advisory committee operates;
10531055 (2)(A) an identification of the mission, goals, and
10541056 objectives intended for the agency or advisory committee and of the
10551057 problem or need that the agency or advisory committee was intended
10561058 to address; and
10571059 (B) the extent to which the mission, goals, and
10581060 objectives have been achieved and the problem or need has been
10591061 addressed;
10601062 (3)(A) an identification of any activities of the
10611063 agency in addition to those granted by statute and of the authority
10621064 for those activities; and
10631065 (B) the extent to which those activities are
10641066 needed;
10651067 (4) an assessment of authority of the agency relating
10661068 to fees, inspections, enforcement, and penalties;
10671069 (5) whether less restrictive or alternative methods of
10681070 performing any function that the agency performs could adequately
10691071 protect or provide service to the public;
10701072 (6) the extent to which the jurisdiction of the agency
10711073 and the programs administered by the agency overlap or duplicate
10721074 those of other agencies, the extent to which the agency coordinates
10731075 with those agencies, and the extent to which the programs
10741076 administered by the agency can be consolidated with the programs of
10751077 other state agencies;
10761078 (7) the promptness and effectiveness with which the
10771079 agency addresses complaints concerning entities or other persons
10781080 affected by the agency, including an assessment of the agency's
10791081 administrative hearings process;
10801082 (8) an assessment of the agency's rulemaking process
10811083 and the extent to which the agency has encouraged participation by
10821084 the public in making its rules and decisions and the extent to which
10831085 the public participation has resulted in rules that benefit the
10841086 public;
10851087 (9) the extent to which the agency has complied with:
10861088 (A) federal and state laws and applicable rules
10871089 regarding equality of employment opportunity and the rights and
10881090 privacy of individuals; and
10891091 (B) state law and applicable rules of any state
10901092 agency regarding purchasing guidelines and programs for
10911093 historically underutilized businesses;
10921094 (10) the extent to which the agency issues and
10931095 enforces rules relating to potential conflicts of interest of its
10941096 employees;
10951097 (11) the extent to which the agency complies with
10961098 Chapters 551 and 552 and follows records management practices that
10971099 enable the agency to respond efficiently to requests for public
10981100 information;
10991101 (12) the effect of federal intervention or loss of
11001102 federal funds if the agency is abolished;
11011103 (13) the extent to which the purpose and effectiveness
11021104 of reporting requirements imposed on the agency justifies the
11031105 continuation of the requirement; [and]
11041106 (14) an assessment of the agency's cybersecurity
11051107 practices using confidential information available from the
11061108 Department of Information Resources or any other appropriate state
11071109 agency; and
11081110 (15) an assessment, using information available from
11091111 the Department of Information Resources, the Attorney General, or
11101112 any other appropriate state agency, of the agency's use of
111113 artificial intelligence systems, including high-risk artificial intelligence
11121114 systems, in its operations and its oversight of the use of
11131115 artificial intelligence systems by entities or persons under the
11141116 agency's jurisdiction, and any related impact on the agency's
11151117 ability to achieve its mission, goals, and objectives.
11161118 SECTION 7. Section 2054.068(b), Government Code, is amended
11171119 to read as follows:
11181120 (b) The department shall collect from each state agency
11191121 information on the status and condition of the agency's information
11201122 technology infrastructure, including information regarding:
11211123 (1) the agency's information security program;
11221124 (2) an inventory of the agency's servers, mainframes,
11231125 cloud services, and other information technology equipment;
11241126 (3) identification of vendors that operate and manage
11251127 the agency's information technology infrastructure; [and]
11261128 (4) any additional related information requested by
11271129 the department; and
11281130 (5) an evaluation of the use, or considered use, of
11291131 artificial intelligence systems and high-risk artificial
11301132 intelligence systems by each state agency.
11311133 SECTION 8. Section 2054.0965(b), Government Code, is
11321134 amended to read as follows:
11331135 Sec. 2054.0965. INFORMATION RESOURCES DEPLOYMENT REVIEW.
11341136 (b) Except as otherwise modified by rules adopted by the
11351137 department, the review must include:
11361138 (1) an inventory of the agency's major information
11371139 systems, as defined by Section 2054.008, and other operational or
11381140 logistical components related to deployment of information
11391141 resources as prescribed by the department;
11401142 (2) an inventory of the agency's major databases,
11411143 artificial intelligence systems, and applications;
11421144 (3) a description of the agency's existing and planned
11431145 telecommunications network configuration;
11441146 (4) an analysis of how information systems,
11451147 components, databases, applications, and other information
11461148 resources have been deployed by the agency in support of:
11471149 (A) applicable achievement goals established
11481150 under Section 2056.006 and the state strategic plan adopted under
11491151 Section 2056.009;
11501152 (B) the state strategic plan for information
11511153 resources; and
11521154 (C) the agency's business objectives, mission,
11531155 and goals;
11541156 (5) agency information necessary to support the state
11551157 goals for interoperability and reuse; and
11561158 (6) confirmation by the agency of compliance with
11571159 state statutes, rules, and standards relating to information
11581160 resources.
11591161 SECTION 9. Not later than September 1, 2025, the attorney
11601162 general shall post on the attorney general's Internet website the
11611163 information and online mechanism required by Section 551.041,
11621164 Business & Commerce Code, as added by this Act.
11631165 SECTION 10. This Act takes effect September 1, 2025.