A BILL TO BE ENTITLED
AN ACT
relating to the regulation and reporting on the use of artificial intelligence systems by certain business entities and state agencies; providing civil penalties.
BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:
SECTION 1. This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.
SECTION 2. Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:
SUBTITLE D. ARTIFICIAL INTELLIGENCE PROTECTION
CHAPTER 551. ARTIFICIAL INTELLIGENCE PROTECTION
SUBCHAPTER A. GENERAL PROVISIONS
Sec. 551.001. DEFINITIONS. In this chapter:
(1) "Algorithmic discrimination" means any condition in which an artificial intelligence system, when deployed, creates unlawful discrimination against a protected classification in violation of the laws of this state or federal law.
(A) "Algorithmic discrimination" does not include the offer, license, or use of a high-risk artificial intelligence system by a developer or deployer for the sole purpose of the developer's or deployer's self-testing, for a non-deployed purpose, to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law.
(2) "Artificial intelligence system" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.
(3) "Biometric identifier" means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.
(4) "Council" means the Artificial Intelligence Council established under Chapter 553.
(5) "Consequential decision" means any decision that has a material legal or similarly significant effect on a consumer's access to, cost of, or terms or conditions of:
(A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision;
(B) education enrollment or an education opportunity;
(C) employment or an employment opportunity;
(D) a financial service;
(E) an essential government service;
(F) residential utility services;
(G) a health-care service or treatment;
(H) housing;
(I) insurance;
(J) a legal service;
(K) a transportation service;
(L) constitutionally protected services or products; or
(M) elections or voting process.
(6) "Consumer" means an individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context.
(7) "Deploy" means to put into effect or commercialize.
(8) "Deployer" means a person doing business in this state that deploys a high-risk artificial intelligence system.
(9) "Developer" means a person doing business in this state that develops a high-risk artificial intelligence system or substantially or intentionally modifies an artificial intelligence system.
(10) "Digital service" means a website, an application, a program, or software that collects or processes personal identifying information with Internet connectivity.
(11) "Digital service provider" means a person who:
(A) owns or operates a digital service;
(B) determines the purpose of collecting and processing the personal identifying information of users of the digital service; and
(C) determines the means used to collect and process the personal identifying information of users of the digital service.
(12) "Distributor" means a person, other than the developer, that makes an artificial intelligence system available in the market for a commercial purpose.
(13) "Generative artificial intelligence" means artificial intelligence models that can emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
(14) "High-risk artificial intelligence system" means any artificial intelligence system that is a substantial factor in a consequential decision. The term does not include:
(A) an artificial intelligence system if the artificial intelligence system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review;
(B) an artificial intelligence system that violates a provision of Subchapter B; or
(C) the following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision:
(i) anti-malware;
(ii) anti-virus;
(iii) calculators;
(iv) cybersecurity;
(v) databases;
(vi) data storage;
(vii) firewall;
(viii) fraud detection systems;
(ix) internet domain registration;
(x) internet website loading;
(xi) networking;
(xii) operational technology;
(xiii) spam- and robocall-filtering;
(xiv) spell-checking;
(xv) spreadsheets;
(xvi) web caching;
(xvii) web scraping;
(xviii) web hosting or any similar technology; or
(xix) any technology that solely communicates in natural language for the sole purpose of providing users with information, making referrals or recommendations relating to customer service, and answering questions and is subject to an acceptable use policy that prohibits generating content that is discriminatory or harmful, as long as the system does not violate any provision listed in Subchapter B.
(15) "Open source artificial intelligence system" means an artificial intelligence system that:
(A) can be used or modified for any purpose without securing permission from the owner or creator of such an artificial intelligence system;
(B) can be shared for any use with or without modifications; and
(C) includes information about the data used to train such system that is sufficiently detailed such that a person skilled in artificial intelligence could create a substantially equivalent system when the following are made available freely or through a non-restrictive license:
(i) the same or similar data;
(ii) the source code used to train and run such system; and
(iii) the model weights and parameters of such system.
(16) "Operational technology" means hardware and software that detects or causes a change through the direct monitoring or control of physical devices, processes, and events in the enterprise.
(17) "Personal data" has the meaning assigned to it by Section 541.001, Business and Commerce Code.
(18) "Risk" means the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event.
(19) "Sensitive personal attribute" means race, political opinions, religious or philosophical beliefs, ethnic orientation, mental health diagnosis, or sex. The term does not include conduct that would be classified as an offense under Chapter 21, Penal Code.
(20) "Social media platform" has the meaning assigned by Section 120.001, Business and Commerce Code.
(21) "Substantial factor" means a factor that is:
(A) considered when making a consequential decision;
(B) likely to alter the outcome of a consequential decision; and
(C) weighed more heavily than any other factor contributing to the consequential decision.
(22) "Intentional and substantial modification" or "substantial modification" means a deliberate change made to an artificial intelligence system that reasonably increases the risk of algorithmic discrimination.
Sec. 551.002. APPLICABILITY OF CHAPTER. This chapter applies only to a person that is not a small business as defined by the United States Small Business Administration, and:
(1) conducts business, promotes, or advertises in this state or produces a product or service consumed by residents of this state; or
(2) engages in the development, distribution, or deployment of a high-risk artificial intelligence system in this state.
Sec. 551.003. DEVELOPER DUTIES. (a) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.
(b) Prior to providing a high-risk artificial intelligence system to a deployer, a developer shall provide to the deployer, in writing, a High-Risk Report that consists of:
(1) a statement describing how the high-risk artificial intelligence system should be used or not be used;
(2) any known limitations of the system that could lead to algorithmic discrimination, the metrics used to measure the system's performance, which shall include, at a minimum, metrics related to accuracy, explainability, transparency, reliability, and security set forth in the most recent version of the "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" published by the National Institute of Standards and Technology, and how the system performs under those metrics in its intended use contexts;
(3) any known or reasonably foreseeable risks of algorithmic discrimination arising from its intended or likely use;
(4) a high-level summary of the type of data used to program or train the high-risk artificial intelligence system;
(5) the data governance measures used to cover the training datasets and their collection, and the measures used to examine the suitability of data sources and prevent unlawful discriminatory biases; and
(6) appropriate principles, processes, and personnel for the deployer's risk management policy.
(c) If a high-risk artificial intelligence system is intentionally or substantially modified after a developer provides it to a deployer, the developer shall make the necessary information in subsection (b) available to deployers within 30 days of the modification.
(d) If a developer believes or has reason to believe that it deployed a high-risk artificial intelligence system that does not comply with a requirement of this chapter, the developer shall immediately take the necessary corrective actions to bring that system into compliance, including by withdrawing it, disabling it, and recalling it, as appropriate. Where applicable, the developer shall inform the distributors or deployers of the high-risk artificial intelligence system concerned.
(e) Where the high-risk artificial intelligence system presents risks of algorithmic discrimination, unlawful use or disclosure of personal data, or deceptive manipulation or coercion of human behavior, and the developer knows or should reasonably know of that risk, it shall immediately investigate the causes, in collaboration with the deployer, where applicable, and inform the attorney general in writing of the nature of the non-compliance and of any relevant corrective action taken.
(f) Developers shall keep detailed records of any generative artificial intelligence training data used to develop a generative artificial intelligence system or service, consistent with the suggested actions under GV-1.2-007 of the "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" by the National Institute of Standards and Technology, or any subsequent versions thereof.
Sec. 551.004. DISTRIBUTOR DUTIES. A distributor of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. If a distributor of a high-risk artificial intelligence system knows or has reason to know that a high-risk artificial intelligence system is not in compliance with any requirement in this chapter, it shall immediately withdraw, disable, or recall, as appropriate, the high-risk artificial intelligence system from the market until the system has been brought into compliance with the requirements of this chapter. The distributor shall inform the developers of the high-risk artificial intelligence system concerned and, where applicable, the deployers.
Sec. 551.005. DEPLOYER DUTIES. A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. If a deployer of a high-risk artificial intelligence system knows or has reason to know that a high-risk artificial intelligence system is not in compliance with any requirement in this chapter, it shall immediately suspend the use of the high-risk artificial intelligence system until the system has been brought into compliance with the requirements of this chapter. The deployer shall inform the developers of the high-risk artificial intelligence system concerned and, where applicable, the distributors.
Sec. 551.006. IMPACT ASSESSMENTS. (a) A deployer that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system. A deployer, or a third party contracted by the deployer for such purposes, shall complete an impact assessment annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. An impact assessment must include, at a minimum, and to the extent reasonably known by or available to the deployer:
(1) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
(2) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;
(3) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;
(4) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;
(5) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
(6) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system will be used;
(7) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system; and
(8) a description of cybersecurity measures and threat modeling conducted on the system.
(b) Following an intentional and substantial modification to a high-risk artificial intelligence system, a deployer must disclose the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
(c) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.
(d) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.
(e) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection.
(f) A deployer may redact any trade secrets as defined by Section 541.001(33), Business & Commerce Code, or information protected from disclosure by state or federal law.
(g) Except as provided in subsection (e) of this section, a developer that makes a high-risk artificial intelligence system available to a deployer shall make available to the deployer the documentation and information necessary for a deployer to complete an impact assessment pursuant to this section.
(h) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate and store an impact assessment unless the high-risk artificial intelligence system is provided to an unaffiliated deployer.
Sec. 551.007. DISCLOSURE OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM TO CONSUMERS. (a) A deployer or developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available a high-risk artificial intelligence system that is intended to interact with consumers shall disclose to each consumer, before or at the time of interaction:
(1) that the consumer is interacting with an artificial intelligence system;
(2) the purpose of the system;
(3) that the system may or will make a consequential decision affecting the consumer;
(4) the nature of any consequential decision in which the system is or may be a substantial factor;
(5) the factors to be used in making any consequential decisions;
(6) contact information of the deployer;
(7) a description of:
(A) any human components of the system;
(B) any automated components of the system; and
(C) how human and automated components are used to inform a consequential decision; and
(8) a declaration of the consumer's rights under Section 551.108.
(b) Disclosure is required under subsection (a) of this section regardless of whether it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
(c) All disclosures under subsection (a) shall be clear and conspicuous, written in plain language, and avoid the use of a dark pattern as defined by Section 541.001, Business & Commerce Code.
(d) All disclosures under subsection (a) may be linked to a separate webpage of the developer or deployer.
(e) Any requirement in this section that may conflict with state or federal law may be exempt.
Sec. 551.008. RISK IDENTIFICATION AND MANAGEMENT POLICY. (a) A developer or deployer of a high-risk artificial intelligence system shall, prior to deployment, assess potential risks of algorithmic discrimination and implement a risk management policy to govern the development or deployment of the high-risk artificial intelligence system. The risk management policy shall:
(1) specify and incorporate the principles and processes that the developer or deployer uses to identify, document, and mitigate, in the development or deployment of a high-risk artificial intelligence system:
(A) known or reasonably foreseeable risks of algorithmic discrimination; and
(B) prohibited uses and unacceptable risks under Subchapter B; and
(2) be reasonable in size, scope, and breadth, considering:
(A) guidance and standards set forth in the most recent version of the "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" published by the National Institute of Standards and Technology;
(B) any existing risk management guidance, standards, or framework applicable to artificial intelligence systems designated by the Banking Commissioner or Insurance Commissioner, if the developer or deployer is regulated by the Department of Banking or Department of Insurance;
(C) the size and complexity of the developer or deployer;
(D) the nature, scope, and intended use of the high-risk artificial intelligence systems developed or deployed; and
(E) the sensitivity and volume of personal data processed in connection with the high-risk artificial intelligence systems.
(b) A risk management policy implemented pursuant to this section may apply to more than one high-risk artificial intelligence system developed or deployed, so long as the developer or deployer complies with all of the foregoing requirements and considerations in adopting and implementing the risk management policy with respect to each high-risk artificial intelligence system covered by the policy.
(c) A developer or deployer may redact or omit any trade secrets as defined by Section 541.001(33), Business & Commerce Code, or information protected from disclosure by state or federal law.
Sec. 551.009. RELATIONSHIPS BETWEEN ARTIFICIAL INTELLIGENCE PARTIES. Any distributor or deployer shall be considered to be a developer of a high-risk artificial intelligence system for the purposes of this chapter and shall be subject to the obligations and duties of a developer under this chapter in any of the following circumstances:
(1) they put their name or trademark on a high-risk artificial intelligence system already placed in the market or put into service;
(2) they intentionally and substantially modify a high-risk artificial intelligence system that has already been placed in the market or has already been put into service in such a way that it remains a high-risk artificial intelligence system under this chapter; or
(3) they modify the intended purpose of an artificial intelligence system which has not previously been classified as high-risk and has already been placed in the market or put into service in such a way that the artificial intelligence system concerned becomes a high-risk artificial intelligence system in accordance with this chapter.
Sec. 551.010. DIGITAL SERVICE PROVIDER AND SOCIAL MEDIA PLATFORM DUTIES REGARDING ARTIFICIAL INTELLIGENCE SYSTEMS. A digital service provider as defined by Section 509.001(2), Business & Commerce Code, or a social media platform as defined by Section 120.001(1), Business & Commerce Code, shall require advertisers on the service or platform to agree to terms preventing the deployment of a high-risk artificial intelligence system on the service or platform that could expose the users of the service or platform to algorithmic discrimination or prohibited uses under Subchapter B.
Sec. 551.011. REPORTING REQUIREMENTS. (a) A deployer must notify, in writing, the council, the attorney general, or the director of the appropriate state agency that regulates the deployer's industry, and affected consumers as soon as practicable after the date on which the deployer discovers or is made aware that a deployed high-risk artificial intelligence system has caused algorithmic discrimination of an individual or group of individuals.
(b) If a developer discovers or is made aware that a deployed high-risk artificial intelligence system is using inputs or providing outputs that constitute a violation of Subchapter B, the deployer must cease operation of the offending system as soon as technically feasible and provide notice to the council and the attorney general as soon as practicable and not later than the 10th day after the date on which the developer discovers or is made aware of the unacceptable risk.
Sec. 551.012. SANDBOX PROGRAM EXCEPTION. (a) Excluding violations of Subchapter B, this chapter does not apply to the development of an artificial intelligence system that is used exclusively for research, training, testing, or other pre-deployment activities performed by active participants of the sandbox program in compliance with Chapter 552.
SUBCHAPTER B. PROHIBITED USES AND UNACCEPTABLE RISK
Sec. 551.051. MANIPULATION OF HUMAN BEHAVIOR TO CIRCUMVENT INFORMED DECISION-MAKING. An artificial intelligence system shall not be developed or deployed that uses subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to make a decision that the person would not have otherwise made, in a manner that causes or is likely to cause significant harm to that person or another person or group of persons.
Sec. 551.052. SOCIAL SCORING. An artificial intelligence system shall not be developed or deployed for the evaluation or classification of natural persons or groups of natural persons based on their social behavior or known, inferred, or predicted personal characteristics with the intent to determine a social score or similar categorical estimation or valuation of a person or groups of persons.
Sec. 551.053. CAPTURE OF BIOMETRIC IDENTIFIERS USING ARTIFICIAL INTELLIGENCE. An artificial intelligence system developed with biometric identifiers of individuals and the targeted or untargeted gathering of images or other media from the internet or any other publicly available source shall not be deployed for the purpose of uniquely identifying a specific individual. An individual is not considered to be informed nor to have provided consent for such purpose pursuant to Section 503.001, Business and Commerce Code, based solely upon the existence on the internet, or other publicly available source, of an image or other media containing one or more biometric identifiers.
Sec. 551.054. CATEGORIZATION BASED ON SENSITIVE ATTRIBUTES. An artificial intelligence system shall not be developed or deployed with the specific purpose of inferring or interpreting sensitive personal attributes of a person or group of persons using biometric identifiers, except for the labeling or filtering of lawfully acquired biometric identifier data.
Sec. 551.055. UTILIZATION OF PERSONAL ATTRIBUTES FOR HARM. An artificial intelligence system shall not utilize characteristics of a person or a specific group of persons based on their race, color, disability, religion, sex, national origin, age, or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person harm.
Sec. 551.056. CERTAIN SEXUALLY EXPLICIT VIDEOS, IMAGES, AND CHILD PORNOGRAPHY. An artificial intelligence system shall not be developed or deployed that produces, assists, or aids in producing, or is capable of producing unlawful visual material in violation of Section 43.26, Penal Code, or an unlawful deep fake video or image in violation of Section 21.165, Penal Code.
SUBCHAPTER C. ENFORCEMENT AND CONSUMER PROTECTIONS
Sec. 551.101. CONSTRUCTION AND APPLICATION. (a) This chapter shall be broadly construed and applied to promote its underlying purposes, which are:
(1) to facilitate and advance the responsible development and use of artificial intelligence systems;
(2) to protect individuals and groups of individuals from known, and unknown but reasonably foreseeable, risks, including unlawful algorithmic discrimination;
(3) to provide transparency regarding those risks in the development, deployment, or use of artificial intelligence systems; and
(4) to provide reasonable notice regarding the use or considered use of artificial intelligence systems by state agencies.
(b) This chapter does not apply to the developer of an open source artificial intelligence system, provided that:
(1) the system is not deployed as a high-risk artificial intelligence system and the developer has taken reasonable steps to ensure that the system cannot be used as a high-risk artificial intelligence system without substantial modifications; and
(2) the weights and technical architecture of the system are made publicly available.
Sec. 551.102. ENFORCEMENT AUTHORITY. The attorney general has authority to enforce this chapter. Excluding violations of Subchapter B, researching, training, testing, or the conducting of other pre-deployment activities by active participants of the sandbox program, in compliance with Chapter 552, does not subject a developer or deployer to penalties or actions.
Sec. 551.103. INTERNET WEBSITE AND COMPLAINT MECHANISM. The attorney general shall post on the attorney general's Internet website:
(1) information relating to the responsibilities of a developer, distributor, and deployer under Subchapter A; and
(2) an online mechanism through which a consumer may submit a complaint under this chapter to the attorney general.
Sec. 551.104. INVESTIGATIVE AUTHORITY. (a) If the attorney general has reasonable cause to believe that a person has engaged in or is engaging in a violation of this chapter, the attorney general may issue a civil investigative demand. The attorney general shall issue such demands in accordance with and under the procedures established under Section 15.10.
(b) The attorney general may request, pursuant to a civil investigative demand issued under Subsection (a), that a developer or deployer of a high-risk artificial intelligence system disclose their risk management policy and impact assessments required under Subchapter A. The attorney general may evaluate the risk management policy and impact assessments for compliance with the requirements set forth in Subchapter A.
(c) The attorney general may not institute an action for a civil penalty against a developer or deployer for artificial intelligence systems that remain isolated from customer interaction in a pre-deployment environment.
Sec. 551.105. NOTICE OF VIOLATION OF CHAPTER; OPPORTUNITY TO CURE. Before bringing an action under Section 551.106, the attorney general shall notify a developer, distributor, or deployer in writing, not later than the 30th day before bringing the action, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated. The attorney general may not bring an action against the developer or deployer if:
(1) within the 30-day period, the developer or deployer cures the identified violation; and
(2) the developer or deployer provides the attorney general a written statement that the developer or deployer:
(A) cured the alleged violation;
(B) notified the consumer, if technically feasible, and the council that the developer or deployer's violation was addressed, if the consumer's contact information has been made available to the developer or deployer and the attorney general;
(C) provided supportive documentation to show how the violation was cured; and
(D) made changes to internal policies, if necessary, to reasonably ensure that no such further violations are likely to occur.
Sec. 551.106. CIVIL PENALTY; INJUNCTION. (a) The attorney general may bring an action in the name of this state to restrain or enjoin a person from violating this chapter and seek injunctive relief.
(b) The attorney general may recover reasonable attorney's fees and other reasonable expenses incurred in investigating and bringing an action under this section.
(c) The attorney general may assess and collect an administrative fine against a developer or deployer who fails to timely cure a violation or who breaches a written statement provided to the attorney general, other than those for a prohibited use, of not less than $50,000 and not more than $100,000 per uncured violation.
(d) The attorney general may assess and collect an administrative fine against a developer or deployer who fails to timely cure a violation of a prohibited use, or whose violation is determined to be uncurable, of not less than $80,000 and not more than $200,000 per violation.
(e) A developer or deployer who is found in violation of this chapter and continues to operate in violation of its provisions shall be assessed an administrative fine of not less than $2,000 and not more than $40,000 per day.
(f) There is a rebuttable presumption that a developer, distributor, or deployer used reasonable care as required under this chapter if the developer, distributor, or deployer complied with their duties under Subchapter A.
Sec. 551.107. ENFORCEMENT ACTIONS BY STATE AGENCIES. A state agency may sanction an individual licensed, registered, or certified by that agency for violations of Subchapter B, including:
(1) the suspension, probation, or revocation of a license, registration, certificate, or other form of permission to engage in an activity; and
(2) monetary penalties up to $100,000.
Sec. 551.108. CONSUMER RIGHTS AND REMEDIES. A consumer may appeal a consequential decision made by a high-risk artificial intelligence system which has an adverse impact on their health, safety, or fundamental rights, and shall have the right to obtain from the deployer clear and meaningful explanations of the role of the high-risk artificial intelligence system in the decision-making procedure and the main elements of the decision taken.
SUBCHAPTER D. CONSTRUCTION OF CHAPTER; LOCAL PREEMPTION
Sec. 551.151. CONSTRUCTION OF CHAPTER. This chapter may not be construed as imposing a requirement on a developer, a deployer, or other person that adversely affects the rights or freedoms of any person, including the right of free speech.
Sec. 551.152. LOCAL PREEMPTION. This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of high-risk artificial intelligence systems.
CHAPTER 552. ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM
SUBCHAPTER A. GENERAL PROVISIONS
Sec. 552.001. DEFINITIONS. In this chapter:
(1) "Applicable agency" means a state agency responsible for regulating a specific sector impacted by an artificial intelligence system.
(2) "Consumer" means a person who engages in transactions involving an artificial intelligence system or is directly affected by the use of such a system.
(3) "Council" means the Artificial Intelligence Council established by Chapter 553.
(4) "Department" means the Texas Department of Information Resources.
(5) "Program participant" means a person or business entity approved to participate in the sandbox program.
(6) "Sandbox program" means the regulatory framework established under this chapter that allows temporary testing of artificial intelligence systems in a controlled, limited manner without full regulatory compliance.
SUBCHAPTER B. SANDBOX PROGRAM FRAMEWORK
Sec. 552.051. ESTABLISHMENT OF SANDBOX PROGRAM. (a) The department, in coordination with the council, shall administer the Artificial Intelligence Regulatory Sandbox Program to facilitate the development, testing, and deployment of innovative artificial intelligence systems in Texas.
(b) The sandbox program is designed to:
(1) promote the safe and innovative use of artificial intelligence across various sectors including healthcare, finance, education, and public services;
(2) encourage the responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety; and
(3) provide clear guidelines for artificial intelligence developers to test systems while temporarily exempt from certain regulatory requirements.
Sec. 552.052. APPLICATION PROCESS. (a) A person or business entity seeking to participate in the sandbox program must submit an application to the council.
(b) The application must include:
(1) a detailed description of the artificial intelligence system and its intended use;
(2) a risk assessment that addresses potential impacts on consumers, privacy, and public safety;
(3) a plan for mitigating any adverse consequences during the testing phase; and
(4) proof of compliance with federal artificial intelligence laws and regulations, where applicable.
Sec. 552.053. DURATION AND SCOPE OF PARTICIPATION. A participant may test an artificial intelligence system under the sandbox program for a period of up to 36 months, unless extended by the department for good cause.
SUBCHAPTER C. OVERSIGHT AND COMPLIANCE
Sec. 552.101. AGENCY COORDINATION. (a) The department shall coordinate with all relevant state regulatory agencies to oversee the operations of the sandbox participants.
(b) A relevant agency may recommend to the department that a participant's sandbox privileges be revoked if the artificial intelligence system:
(1) poses undue risk to public safety or welfare; or
(2) violates any federal or state laws that the sandbox program cannot override.
Sec. 552.102. REPORTING REQUIREMENTS. (a) Each sandbox participant must submit quarterly reports to the department, which shall include:
(1) system performance metrics;
(2) updates on how the system mitigates any risks associated with its operation; and
(3) feedback from consumers and affected stakeholders that are using a product that has been deployed under the sandbox program.
(b) The department must submit an annual report to the legislature detailing:
(1) the number of participants in the sandbox program;
(2) the overall performance and impact of artificial intelligence systems tested within the program; and
(3) recommendations for future legislative or regulatory reforms.
CHAPTER 553. TEXAS ARTIFICIAL INTELLIGENCE COUNCIL
SUBCHAPTER A. CREATION AND ORGANIZATION OF COUNCIL
Sec. 553.001. CREATION OF COUNCIL. (a) The Artificial Intelligence Council is administratively attached to the office of the governor, and the office of the governor shall provide administrative support to the council as provided by this section.
(b) The office of the governor and the council shall enter into a memorandum of understanding detailing:
(1) the administrative support the council requires from the office of the governor to fulfill the purposes of this chapter;
(2) the reimbursement of administrative expenses to the office of the governor; and
(3) any other provisions available by law to ensure the efficient operation of the council as attached to the office of the governor.
(c) The purpose of the council is to:
(1) ensure artificial intelligence systems are ethical and in the public's best interest and do not harm public safety or undermine individual freedoms by finding gaps in the Penal Code and Chapter 82, Civil Practice and Remedies Code, and making recommendations to the legislature;
(2) identify existing laws and regulations that impede innovation in artificial intelligence development and recommend appropriate reforms;
(3) analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;
(4) investigate and evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators;
(5) investigate and evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users; and
(6) offer guidance and recommendations to state agencies, including advisory opinions on the ethical and legal use of artificial intelligence.
Sec. 553.002. COUNCIL MEMBERSHIP. (a) The council is composed of 10 members as follows:
(1) four members of the public appointed by the governor;
(2) two members of the public appointed by the lieutenant governor;
(3) two members of the public appointed by the speaker of the house of representatives;
(4) one senator appointed by the lieutenant governor as a nonvoting member; and
(5) one member of the house of representatives appointed by the speaker of the house of representatives as a nonvoting member.
(b) Voting members of the council serve staggered four-year terms, with the terms of four members expiring every two years.
(c) The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.
(d) The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council's functions, including technical, ethical, regulatory, and other relevant areas.
Sec. 553.003. QUALIFICATIONS. (a) Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:
(1) artificial intelligence technologies;
(2) data privacy and security;
(3) ethics in technology or law;
(4) public policy and regulation; or
(5) risk management or safety related to artificial intelligence systems.
(b) Members must not hold an office of profit under the state or federal government at the time of appointment.
Sec. 553.004. STAFF AND ADMINISTRATION. The council may employ an executive director and other personnel as necessary to perform its duties.
793 | 795 | | SUBCHAPTER B. POWERS AND DUTIES OF THE COUNCIL |
---|
794 | 796 | | Sec. 553.101. ISSUANCE OF ADVISORY OPINIONS. (a) A state |
---|
795 | 797 | | agency may request a written advisory opinion from the council |
---|
796 | 798 | | regarding the use of artificial intelligence systems in the state. |
---|
797 | 799 | | (b) The council may issue advisory opinions on state use of |
---|
798 | 800 | | artificial intelligence systems regarding: |
---|
799 | 801 | | (1) the compliance of artificial intelligence systems |
---|
800 | 802 | | with Texas law; |
---|
801 | 803 | | (2) the ethical implications of artificial |
---|
802 | 804 | | intelligence deployments in the state; |
---|
803 | 805 | | (3) data privacy and security concerns related to |
---|
804 | 806 | | artificial intelligence systems; or |
---|
805 | 807 | | (4) potential liability or legal risks associated with |
---|
806 | 808 | | the use of artificial intelligence systems.
---|
807 | 809 | | Sec. 553.102. RULEMAKING AUTHORITY. (a) The council may |
---|
808 | 810 | | adopt rules necessary to administer its duties under this chapter, |
---|
809 | 811 | | including: |
---|
810 | 812 | | (1) procedures for requesting advisory opinions; |
---|
811 | 813 | | (2) standards for ethical artificial intelligence |
---|
812 | 814 | | development and deployment; and
---|
813 | 815 | | (3) guidelines for evaluating the safety, privacy, and |
---|
814 | 816 | | fairness of artificial intelligence systems. |
---|
815 | 817 | | (b) The council's rules shall align with state laws on |
---|
816 | 818 | | artificial intelligence, technology, data security, and consumer |
---|
817 | 819 | | protection. |
---|
818 | 820 | | Sec. 553.103. TRAINING AND EDUCATIONAL OUTREACH. The |
---|
819 | 821 | | council shall conduct training programs for state agencies and |
---|
820 | 822 | | local governments on the ethical use of artificial intelligence |
---|
821 | 823 | | systems. |
---|
822 | 824 | | SECTION 3. Section 503.001, Business & Commerce Code, is
---|
823 | 825 | | amended by adding Subsection (c-3) to read as follows: |
---|
824 | 826 | | (c-3) This section does not apply to the training, |
---|
825 | 827 | | processing, or storage of biometric identifiers involved in machine |
---|
826 | 828 | | learning or artificial intelligence systems, unless performed for |
---|
827 | 829 | | the purpose of uniquely identifying a specific individual. If a |
---|
828 | 830 | | biometric identifier captured for the purpose of training an |
---|
829 | 831 | | artificial intelligence system is subsequently used for a |
---|
830 | 832 | | commercial purpose, the person possessing the biometric identifier |
---|
831 | 833 | | is subject to this section's provisions for the possession and |
---|
832 | 834 | | destruction of a biometric identifier and the associated penalties. |
---|
833 | 835 | | SECTION 4. Sections 541.051(b), 541.101(a), 541.102(a), |
---|
834 | 836 | | and 541.104(a), Business & Commerce Code, are amended to read
---|
835 | 837 | | as follows: |
---|
836 | 838 | | Sec. 541.051. CONSUMER'S PERSONAL DATA RIGHTS; REQUEST TO |
---|
837 | 839 | | EXERCISE RIGHTS. (a) A consumer is entitled to exercise the |
---|
838 | 840 | | consumer rights authorized by this section at any time by |
---|
839 | 841 | | submitting a request to a controller specifying the consumer rights |
---|
840 | 842 | | the consumer wishes to exercise. With respect to the processing of |
---|
841 | 843 | | personal data belonging to a known child, a parent or legal guardian |
---|
842 | 844 | | of the child may exercise the consumer rights on behalf of the |
---|
843 | 845 | | child. |
---|
844 | 846 | | (b) A controller shall comply with an authenticated |
---|
845 | 847 | | consumer request to exercise the right to: |
---|
846 | 848 | | (1) confirm whether a controller is processing the |
---|
847 | 849 | | consumer's personal data and to access the personal data; |
---|
848 | 850 | | (2) correct inaccuracies in the consumer's personal |
---|
849 | 851 | | data, taking into account the nature of the personal data and the |
---|
850 | 852 | | purposes of the processing of the consumer's personal data; |
---|
851 | 853 | | (3) delete personal data provided by or obtained about |
---|
852 | 854 | | the consumer; |
---|
853 | 855 | | (4) if the data is available in a digital format, |
---|
854 | 856 | | obtain a copy of the consumer's personal data that the consumer |
---|
855 | 857 | | previously provided to the controller in a portable and, to the |
---|
856 | 858 | | extent technically feasible, readily usable format that allows the |
---|
857 | 859 | | consumer to transmit the data to another controller without |
---|
858 | 860 | | hindrance; [or] |
---|
859 | 861 | | (5) know if the consumer's personal data is or will be |
---|
860 | 862 | | used in any artificial intelligence system and for what purposes; |
---|
861 | 863 | | or |
---|
862 | 864 | | ([5]6) opt out of the processing of the personal data |
---|
863 | 865 | | for purposes of: |
---|
864 | 866 | | (A) targeted advertising; |
---|
865 | 867 | | (B) the sale of personal data; [or] |
---|
866 | 868 | | (C) the sale of personal data for use in |
---|
867 | 869 | | artificial intelligence systems prior to being collected; or |
---|
868 | 870 | | ([C]D) profiling in furtherance of a decision |
---|
869 | 871 | | that produces a legal or similarly significant effect concerning |
---|
870 | 872 | | the consumer. |
---|
871 | 873 | | Sec. 541.101. CONTROLLER DUTIES; TRANSPARENCY. (a) A |
---|
872 | 874 | | controller: |
---|
873 | 875 | | (1) shall limit the collection of personal data to |
---|
874 | 876 | | what is adequate, relevant, and reasonably necessary in relation to |
---|
875 | 877 | | the purposes for which that personal data is processed, as |
---|
876 | 878 | | disclosed to the consumer; [and] |
---|
877 | 879 | | (2) for purposes of protecting the confidentiality, |
---|
878 | 880 | | integrity, and accessibility of personal data, shall establish, |
---|
879 | 881 | | implement, and maintain reasonable administrative, technical, and |
---|
880 | 882 | | physical data security practices that are appropriate to the volume |
---|
881 | 883 | | and nature of the personal data at issue[.]; and
---|
882 | 884 | | (3) for purposes of protecting against the unauthorized
---|
883 | 885 | | access, disclosure, alteration, or destruction of data collected, |
---|
884 | 886 | | stored, and processed by artificial intelligence systems, shall |
---|
885 | 887 | | establish, implement, and maintain reasonable administrative,
---|
886 | 888 | | technical, and physical data security practices that are |
---|
887 | 889 | | appropriate to the volume and nature of the data collected, stored, |
---|
888 | 890 | | and processed by artificial intelligence systems. |
---|
889 | 891 | | Sec. 541.102. PRIVACY NOTICE. (a) A controller shall
---|
890 | 892 | | provide consumers with a reasonably accessible and clear privacy |
---|
891 | 893 | | notice that includes: |
---|
892 | 894 | | (1) the categories of personal data processed by the |
---|
893 | 895 | | controller, including, if applicable, any sensitive data processed |
---|
894 | 896 | | by the controller; |
---|
895 | 897 | | (2) the purpose for processing personal data; |
---|
896 | 898 | | (3) how consumers may exercise their consumer rights |
---|
897 | 899 | | under Subchapter B, including the process by which a consumer may |
---|
898 | 900 | | appeal a controller's decision with regard to the consumer's |
---|
899 | 901 | | request; |
---|
900 | 902 | | (4) if applicable, the categories of personal data |
---|
901 | 903 | | that the controller shares with third parties; |
---|
902 | 904 | | (5) if applicable, the categories of third parties |
---|
903 | 905 | | with whom the controller shares personal data; [and] |
---|
904 | 906 | | (6) if applicable, an acknowledgement of the |
---|
905 | 907 | | collection, use, and sharing of personal data for artificial |
---|
906 | 908 | | intelligence purposes; and |
---|
907 | 909 | | ([6]7) a description of the methods required under |
---|
908 | 910 | | Section 541.055 through which consumers can submit requests to |
---|
909 | 911 | | exercise their consumer rights under this chapter. |
---|
910 | 912 | | Sec. 541.104. DUTIES OF PROCESSOR. (a) A processor shall |
---|
911 | 913 | | adhere to the instructions of a controller and shall assist the |
---|
912 | 914 | | controller in meeting or complying with the controller's duties or |
---|
913 | 915 | | requirements under this chapter, including: |
---|
914 | 916 | | (1) assisting the controller in responding to consumer |
---|
915 | 917 | | rights requests submitted under Section 541.051 by using |
---|
916 | 918 | | appropriate technical and organizational measures, as reasonably |
---|
917 | 919 | | practicable, taking into account the nature of processing and the |
---|
918 | 920 | | information available to the processor; |
---|
919 | 921 | | (2) assisting the controller with regard to complying |
---|
920 | 922 | | with the [requirement]requirements relating to the security of |
---|
921 | 923 | | processing personal data, and if applicable, the data collected, |
---|
922 | 924 | | stored, and processed by artificial intelligence systems and to the |
---|
923 | 925 | | notification of a breach of security of the processor's system |
---|
924 | 926 | | under Chapter 521, taking into account the nature of processing and |
---|
925 | 927 | | the information available to the processor; and |
---|
926 | 928 | | (3) providing necessary information to enable the |
---|
927 | 929 | | controller to conduct and document data protection assessments |
---|
928 | 930 | | under Section 541.105. |
---|
929 | 931 | | SECTION 5. Subtitle E, Title 4, Labor Code, is amended by |
---|
930 | 932 | | adding Chapter 319 to read as follows: |
---|
931 | 933 | | CHAPTER 319. TEXAS ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT |
---|
932 | 934 | | GRANT PROGRAM |
---|
933 | 935 | | SUBCHAPTER A. GENERAL PROVISIONS |
---|
934 | 936 | | Sec. 319.001. DEFINITIONS. In this chapter: |
---|
935 | 937 | | (1) "Artificial intelligence industry" means |
---|
936 | 938 | | businesses, research organizations, governmental entities, and |
---|
937 | 939 | | educational institutions engaged in the development, deployment, |
---|
938 | 940 | | or use of artificial intelligence technologies in Texas. |
---|
939 | 941 | | (2) "Commission" means the Texas Workforce |
---|
940 | 942 | | Commission. |
---|
941 | 943 | | (3) "Eligible entity" means Texas-based businesses in |
---|
942 | 944 | | the artificial intelligence industry, public school districts, |
---|
943 | 945 | | community colleges, public technical institutes, and workforce |
---|
944 | 946 | | development organizations. |
---|
945 | 947 | | (4) "Program" means the Texas Artificial Intelligence |
---|
946 | 948 | | Workforce Development Grant Program established under this |
---|
947 | 949 | | chapter. |
---|
948 | 950 | | SUBCHAPTER B. ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT GRANT |
---|
949 | 951 | | PROGRAM |
---|
950 | 952 | | Sec. 319.051. ESTABLISHMENT OF GRANT PROGRAM. (a) The |
---|
951 | 953 | | commission shall establish the Texas Artificial Intelligence |
---|
952 | 954 | | Workforce Development Grant Program to: |
---|
953 | 955 | | (1) support and assist Texas-based artificial |
---|
954 | 956 | | intelligence companies in developing a skilled workforce; |
---|
955 | 957 | | (2) provide grants to local community colleges and |
---|
956 | 958 | | public high schools to implement or expand career and technical |
---|
957 | 959 | | education programs focused on artificial intelligence readiness |
---|
958 | 960 | | and skill development; and |
---|
959 | 961 | | (3) offer opportunities to retrain and reskill workers |
---|
960 | 962 | | through partnerships with the artificial intelligence industry and |
---|
961 | 963 | | workforce development programs. |
---|
962 | 964 | | (b) The program is intended to: |
---|
963 | 965 | | (1) prepare Texas workers and students for employment |
---|
964 | 966 | | in the rapidly growing artificial intelligence industry; |
---|
965 | 967 | | (2) support the creation of postsecondary programs and |
---|
966 | 968 | | certifications relevant to current artificial intelligence |
---|
967 | 969 | | opportunities; |
---|
968 | 970 | | (3) ensure that Texas maintains a competitive edge in |
---|
969 | 971 | | artificial intelligence innovation and workforce development; and |
---|
970 | 972 | | (4) address workforce gaps in artificial |
---|
971 | 973 | | intelligence-related fields, including data science, |
---|
972 | 974 | | cybersecurity, machine learning, robotics, and automation. |
---|
973 | 975 | | (c) The commission shall adopt rules necessary to implement |
---|
974 | 976 | | this subchapter. |
---|
975 | 977 | | Sec. 319.052. FEDERAL FUNDS AND GIFTS, GRANTS, AND |
---|
976 | 978 | | DONATIONS. |
---|
977 | 979 | | In addition to other money appropriated by the legislature, |
---|
978 | 980 | | for the purpose of providing artificial intelligence workforce |
---|
979 | 981 | | opportunities under the program established under this subchapter,
---|
980 | 982 | | the commission may: |
---|
981 | 983 | | (1) seek and apply for any available federal funds; |
---|
982 | 984 | | and |
---|
983 | 985 | | (2) solicit and accept gifts, grants, and donations |
---|
984 | 986 | | from any other source, public or private, as necessary to ensure |
---|
985 | 987 | | effective implementation of the program. |
---|
986 | 988 | | Sec. 319.053. ELIGIBILITY FOR GRANTS. (a) The following |
---|
987 | 989 | | entities are eligible to apply for grants under this program: |
---|
988 | 990 | | (1) Texas-based businesses engaged in the development |
---|
989 | 991 | | or deployment of artificial intelligence technologies; |
---|
990 | 992 | | (2) public school districts and charter schools |
---|
991 | 993 | | offering or seeking to offer career and technical education |
---|
992 | 994 | | programs in artificial intelligence-related fields or to update |
---|
993 | 995 | | existing curricula to address these fields; |
---|
994 | 996 | | (3) public community colleges and technical |
---|
995 | 997 | | institutes that develop artificial intelligence-related curricula |
---|
996 | 998 | | or training programs or update existing curricula or training |
---|
997 | 999 | | programs to incorporate artificial intelligence training; and |
---|
998 | 1000 | | (4) workforce development organizations in |
---|
999 | 1001 | | partnership with artificial intelligence companies to reskill and |
---|
1000 | 1002 | | retrain workers in artificial intelligence competencies. |
---|
1001 | 1003 | | (b) To be eligible, the entity must: |
---|
1002 | 1004 | | (1) submit an application to the commission in the |
---|
1003 | 1005 | | form and manner prescribed by the commission; and |
---|
1004 | 1006 | | (2) demonstrate the capacity to develop and implement |
---|
1005 | 1007 | | training, educational, or workforce development programs that |
---|
1006 | 1008 | | align with the needs of the artificial intelligence industry in |
---|
1007 | 1009 | | Texas and lead to knowledge, skills, and work-based experiences |
---|
1008 | 1010 | | that are transferable to similar employment opportunities in the |
---|
1009 | 1011 | | artificial intelligence industry. |
---|
1010 | 1012 | | Sec. 319.054. USE OF GRANTS. (a) Grants awarded under the |
---|
1011 | 1013 | | program may be used for: |
---|
1012 | 1014 | | (1) developing or expanding workforce training |
---|
1013 | 1015 | | programs for artificial intelligence-related skills, including but |
---|
1014 | 1016 | | not limited to machine learning, data analysis, software |
---|
1015 | 1017 | | development, and robotics; |
---|
1016 | 1018 | | (2) creating or enhancing career and technical |
---|
1017 | 1019 | | education programs in artificial intelligence for high school |
---|
1018 | 1020 | | students, with a focus on preparing them for careers in artificial |
---|
1019 | 1021 | | intelligence or related fields; |
---|
1020 | 1022 | | (3) providing financial support for instructors, |
---|
1021 | 1023 | | equipment, and technology necessary for artificial |
---|
1022 | 1024 | | intelligence-related workforce training; |
---|
1023 | 1025 | | (4) partnering with local businesses to develop |
---|
1024 | 1026 | | internship programs, on-the-job training opportunities, instructor |
---|
1025 | 1027 | | externships, and apprenticeships in the artificial intelligence |
---|
1026 | 1028 | | industry; |
---|
1027 | 1029 | | (5) funding scholarships or stipends for students, |
---|
1028 | 1030 | | instructors, and workers participating in artificial intelligence |
---|
1029 | 1031 | | training programs, particularly for individuals from underserved |
---|
1030 | 1032 | | or underrepresented communities; or |
---|
1031 | 1033 | | (6) reskilling and retraining workers displaced by |
---|
1032 | 1034 | | technological changes or job automation, with an emphasis on |
---|
1033 | 1035 | | artificial intelligence-related job roles. |
---|
1034 | 1036 | | (b) The commission shall prioritize funding for: |
---|
1035 | 1037 | | (1) initiatives that partner with rural and |
---|
1036 | 1038 | | underserved communities to promote artificial intelligence |
---|
1037 | 1039 | | education and career pathways; |
---|
1038 | 1040 | | (2) programs that lead to credentials of value in |
---|
1039 | 1041 | | artificial intelligence or related fields; and |
---|
1040 | 1042 | | (3) proposals that include partnerships between the |
---|
1041 | 1043 | | artificial intelligence industry, a public or private institution |
---|
1042 | 1044 | | of higher education in this state, and workforce development |
---|
1043 | 1045 | | organizations. |
---|
1044 | 1046 | | SECTION 6. Section 325.011, Government Code, is amended to |
---|
1045 | 1047 | | read as follows: |
---|
1046 | 1048 | | Sec. 325.011. CRITERIA FOR REVIEW. The commission and its |
---|
1047 | 1049 | | staff shall consider the following criteria in determining whether |
---|
1048 | 1050 | | a public need exists for the continuation of a state agency or its |
---|
1049 | 1051 | | advisory committees or for the performance of the functions of the |
---|
1050 | 1052 | | agency or its advisory committees: |
---|
1051 | 1053 | | (1) the efficiency and effectiveness with which the |
---|
1052 | 1054 | | agency or the advisory committee operates; |
---|
1053 | 1055 | | (2)(A) an identification of the mission, goals, and |
---|
1054 | 1056 | | objectives intended for the agency or advisory committee and of the |
---|
1055 | 1057 | | problem or need that the agency or advisory committee was intended |
---|
1056 | 1058 | | to address; and |
---|
1057 | 1059 | | (B) the extent to which the mission, goals, and |
---|
1058 | 1060 | | objectives have been achieved and the problem or need has been |
---|
1059 | 1061 | | addressed; |
---|
1060 | 1062 | | (3)(A) an identification of any activities of the |
---|
1061 | 1063 | | agency in addition to those granted by statute and of the authority |
---|
1062 | 1064 | | for those activities; and |
---|
1063 | 1065 | | (B) the extent to which those activities are |
---|
1064 | 1066 | | needed; |
---|
1065 | 1067 | | (4) an assessment of authority of the agency relating |
---|
1066 | 1068 | | to fees, inspections, enforcement, and penalties; |
---|
1067 | 1069 | | (5) whether less restrictive or alternative methods of |
---|
1068 | 1070 | | performing any function that the agency performs could adequately |
---|
1069 | 1071 | | protect or provide service to the public; |
---|
1070 | 1072 | | (6) the extent to which the jurisdiction of the agency |
---|
1071 | 1073 | | and the programs administered by the agency overlap or duplicate |
---|
1072 | 1074 | | those of other agencies, the extent to which the agency coordinates |
---|
1073 | 1075 | | with those agencies, and the extent to which the programs |
---|
1074 | 1076 | | administered by the agency can be consolidated with the programs of |
---|
1075 | 1077 | | other state agencies; |
---|
1076 | 1078 | | (7) the promptness and effectiveness with which the |
---|
1077 | 1079 | | agency addresses complaints concerning entities or other persons |
---|
1078 | 1080 | | affected by the agency, including an assessment of the agency's |
---|
1079 | 1081 | | administrative hearings process; |
---|
1080 | 1082 | | (8) an assessment of the agency's rulemaking process |
---|
1081 | 1083 | | and the extent to which the agency has encouraged participation by |
---|
1082 | 1084 | | the public in making its rules and decisions and the extent to which |
---|
1083 | 1085 | | the public participation has resulted in rules that benefit the |
---|
1084 | 1086 | | public; |
---|
1085 | 1087 | | (9) the extent to which the agency has complied with: |
---|
1086 | 1088 | | (A) federal and state laws and applicable rules |
---|
1087 | 1089 | | regarding equality of employment opportunity and the rights and |
---|
1088 | 1090 | | privacy of individuals; and |
---|
1089 | 1091 | | (B) state law and applicable rules of any state |
---|
1090 | 1092 | | agency regarding purchasing guidelines and programs for |
---|
1091 | 1093 | | historically underutilized businesses; |
---|
1092 | 1094 | | (10) the extent to which the agency issues and |
---|
1093 | 1095 | | enforces rules relating to potential conflicts of interest of its |
---|
1094 | 1096 | | employees; |
---|
1095 | 1097 | | (11) the extent to which the agency complies with |
---|
1096 | 1098 | | Chapters 551 and 552 and follows records management practices that |
---|
1097 | 1099 | | enable the agency to respond efficiently to requests for public |
---|
1098 | 1100 | | information; |
---|
1099 | 1101 | | (12) the effect of federal intervention or loss of |
---|
1100 | 1102 | | federal funds if the agency is abolished; |
---|
1101 | 1103 | | (13) the extent to which the purpose and effectiveness |
---|
1102 | 1104 | | of reporting requirements imposed on the agency justifies the |
---|
1103 | 1105 | | continuation of the requirement; [and] |
---|
1104 | 1106 | | (14) an assessment of the agency's cybersecurity |
---|
1105 | 1107 | | practices using confidential information available from the |
---|
1106 | 1108 | | Department of Information Resources or any other appropriate state |
---|
1107 | 1109 | | agency; and |
---|
1108 | 1110 | | (15) an assessment, using information available from |
---|
1109 | 1111 | | the Department of Information Resources, the Attorney General, or |
---|
1110 | 1112 | | any other appropriate state agency, of the agency's use of |
---|
1111 | 1113 | | artificial intelligence systems, high-risk artificial intelligence |
---|
1112 | 1114 | | systems, in its operations and its oversight of the use of |
---|
1113 | 1115 | | artificial intelligence systems by entities or persons under the |
---|
1114 | 1116 | | agency's jurisdiction, and any related impact on the agency's |
---|
1115 | 1117 | | ability to achieve its mission, goals, and objectives. |
---|
1116 | 1118 | | SECTION 7. Section 2054.068(b), Government Code, is amended |
---|
1117 | 1119 | | to read as follows: |
---|
1118 | 1120 | | (b) The department shall collect from each state agency |
---|
1119 | 1121 | | information on the status and condition of the agency's information |
---|
1120 | 1122 | | technology infrastructure, including information regarding: |
---|
1121 | 1123 | | (1) the agency's information security program; |
---|
1122 | 1124 | | (2) an inventory of the agency's servers, mainframes, |
---|
1123 | 1125 | | cloud services, and other information technology equipment; |
---|
1124 | 1126 | | (3) identification of vendors that operate and manage |
---|
1125 | 1127 | | the agency's information technology infrastructure; [and] |
---|
1126 | 1128 | | (4) any additional related information requested by |
---|
1127 | 1129 | | the department; and |
---|
1128 | 1130 | | (5) an evaluation of the use, or considered use, of |
---|
1129 | 1131 | | artificial intelligence systems and high-risk artificial |
---|
1130 | 1132 | | intelligence systems by each state agency. |
---|
1131 | 1133 | | SECTION 8. Section 2054.0965(b), Government Code, is |
---|
1132 | 1134 | | amended to read as follows: |
---|
1133 | 1135 | | Sec. 2054.0965. INFORMATION RESOURCES DEPLOYMENT REVIEW. |
---|
1134 | 1136 | | (b) Except as otherwise modified by rules adopted by the |
---|
1135 | 1137 | | department, the review must include: |
---|
1136 | 1138 | | (1) an inventory of the agency's major information |
---|
1137 | 1139 | | systems, as defined by Section 2054.008, and other operational or |
---|
1138 | 1140 | | logistical components related to deployment of information |
---|
1139 | 1141 | | resources as prescribed by the department; |
---|
1140 | 1142 | | (2) an inventory of the agency's major databases, |
---|
1141 | 1143 | | artificial intelligence systems, and applications; |
---|
1142 | 1144 | | (3) a description of the agency's existing and planned |
---|
1143 | 1145 | | telecommunications network configuration; |
---|
1144 | 1146 | | (4) an analysis of how information systems, |
---|
1145 | 1147 | | components, databases, applications, and other information |
---|
1146 | 1148 | | resources have been deployed by the agency in support of: |
---|
1147 | 1149 | | (A) applicable achievement goals established |
---|
1148 | 1150 | | under Section 2056.006 and the state strategic plan adopted under |
---|
1149 | 1151 | | Section 2056.009; |
---|
1150 | 1152 | | (B) the state strategic plan for information |
---|
1151 | 1153 | | resources; and |
---|
1152 | 1154 | | (C) the agency's business objectives, mission, |
---|
1153 | 1155 | | and goals; |
---|
1154 | 1156 | | (5) agency information necessary to support the state |
---|
1155 | 1157 | | goals for interoperability and reuse; and |
---|
1156 | 1158 | | (6) confirmation by the agency of compliance with |
---|
1157 | 1159 | | state statutes, rules, and standards relating to information |
---|
1158 | 1160 | | resources. |
---|
1159 | 1161 | | SECTION 9. Not later than September 1, 2025, the attorney |
---|
1160 | 1162 | | general shall post on the attorney general's Internet website the |
---|
1161 | 1163 | | information and online mechanism required by Section 551.041, |
---|
1162 | 1164 | | Business & Commerce Code, as added by this Act. |
---|
1163 | 1165 | | SECTION 10. This Act takes effect September 1, 2025. |
---|