Maryland 2025 Regular Session

Maryland Senate Bill SB936 Introduced / Bill

Filed 02/05/2025

                     
 
EXPLANATION: CAPITALS INDICATE MATTER ADDED TO EXISTING LAW.
        [Brackets] indicate matter deleted from existing law.

SENATE BILL 936 
I3, S1   	5lr0853 
    	CF 5lr3153 
By: Senators Hester, Gile, and Love 
Introduced and read first time: January 28, 2025 
Assigned to: Finance 
 
A BILL ENTITLED 
 
AN ACT concerning

Consumer Protection – High–Risk Artificial Intelligence – Developer and Deployer Requirements

FOR the purpose of requiring a certain developer of and a certain deployer who uses a certain high–risk artificial intelligence system to use reasonable care to protect consumers from known and reasonably foreseeable risks of certain algorithmic discrimination; prohibiting a developer from providing to a certain deployer or other developer a high–risk artificial intelligence system unless certain disclosures are provided to the deployer or developer; requiring a developer to make certain documentation and information available to complete an impact assessment in a certain manner; requiring a deployer to design, implement, and maintain a risk management policy and program for the high–risk artificial intelligence system in use by the deployer; requiring a deployer to complete an impact assessment of its high–risk artificial intelligence system; requiring a deployer to provide certain information to a consumer regarding the deployment of and decisions made by a high–risk artificial intelligence system; requiring a deployer to provide consumers with an opportunity to correct certain information and appeal a certain consequential decision; authorizing the Attorney General to enforce this Act; authorizing a consumer to bring a civil action against a deployer under certain circumstances; and generally relating to the use of high–risk artificial intelligence systems in the State.

BY adding to
 Article – Commercial Law
 Section 14–47A–01 through 14–47A–08 to be under the new subtitle “Subtitle 47A. High–Risk Artificial Intelligence Developer Act”
 Annotated Code of Maryland
 (2013 Replacement Volume and 2024 Supplement)

 SECTION 1. BE IT ENACTED BY THE GENERAL ASSEMBLY OF MARYLAND, That the Laws of Maryland read as follows:

Article – Commercial Law

SUBTITLE 47A. HIGH–RISK ARTIFICIAL INTELLIGENCE DEVELOPER ACT.

14–47A–01.

 (A) IN THIS SUBTITLE THE FOLLOWING WORDS HAVE THE MEANINGS INDICATED.

 (B) (1) “ALGORITHMIC DISCRIMINATION” MEANS THE USE OF AN ARTIFICIAL INTELLIGENCE SYSTEM THAT RESULTS IN AN UNLAWFUL DIFFERENTIAL TREATMENT OR IMPACT THAT DISFAVORS AN INDIVIDUAL OR GROUP OF INDIVIDUALS ON THE BASIS OF THE INDIVIDUAL’S OR GROUP’S ACTUAL OR PERCEIVED:

 (I) AGE;

 (II) COLOR;

 (III) DISABILITY;

 (IV) ETHNICITY;

 (V) GENETIC INFORMATION;

 (VI) LIMITED PROFICIENCY IN THE ENGLISH LANGUAGE;

 (VII) NATIONAL ORIGIN;

 (VIII) RACE;

 (IX) RELIGION;

 (X) REPRODUCTIVE HEALTH;

 (XI) SEX;

 (XII) SEXUAL ORIENTATION;

 (XIII) VETERAN STATUS; OR

 (XIV) CLASSIFICATION OTHERWISE PROTECTED UNDER STATE OR FEDERAL LAW.
 
 (2) “ALGORITHMIC DISCRIMINATION” DOES NOT INCLUDE:

 (I) THE OFFER, LICENSE, OR USE OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM BY A DEVELOPER OR DEPLOYER FOR THE SOLE PURPOSE OF THE DEVELOPER’S OR DEPLOYER’S SELF–TESTING TO IDENTIFY, MITIGATE, OR PREVENT DISCRIMINATION OR OTHERWISE ENSURE COMPLIANCE WITH STATE AND FEDERAL LAW;

 (II) THE EXPANSION OF AN APPLICANT, A CUSTOMER, OR A PARTICIPANT POOL TO INCREASE DIVERSITY OR REDRESS HISTORICAL DISCRIMINATION; OR

 (III) AN ACT OR OMISSION BY OR ON BEHALF OF A PRIVATE CLUB OR OTHER ESTABLISHMENT NOT IN FACT OPEN TO THE PUBLIC, AS PROVIDED BY TITLE II OF THE CIVIL RIGHTS ACT OF 1964 UNDER 42 U.S.C. § 2000A(E).

 (C) (1) “ARTIFICIAL INTELLIGENCE SYSTEM” MEANS A MACHINE LEARNING–BASED SYSTEM THAT, FOR ANY EXPLICIT OR IMPLICIT OBJECTIVE, INFERS FROM THE INPUTS THE SYSTEM RECEIVES HOW TO GENERATE OUTPUTS, INCLUDING CONTENT, DECISIONS, PREDICTIONS, OR RECOMMENDATIONS THAT CAN INFLUENCE PHYSICAL OR VIRTUAL ENVIRONMENTS.

 (2) “ARTIFICIAL INTELLIGENCE SYSTEM” DOES NOT INCLUDE AN ARTIFICIAL INTELLIGENCE SYSTEM OR A MODEL THAT IS USED FOR DEVELOPMENT, PROTOTYPING, AND RESEARCH ACTIVITIES BEFORE THE ARTIFICIAL INTELLIGENCE SYSTEM OR MODEL IS RELEASED ON THE MARKET.

 (D) “CONSEQUENTIAL DECISION” MEANS A DECISION THAT HAS A MATERIALLY LEGAL OR SIMILARLY SIGNIFICANT EFFECT ON THE PROVISION OR DENIAL TO A CONSUMER OF:

 (1) PAROLE, PROBATION, A PARDON, OR ANY OTHER RELEASE FROM INCARCERATION OR COURT SUPERVISION;

 (2) EDUCATION ENROLLMENT OR EDUCATION OPPORTUNITY;

 (3) ACCESS TO OR PROVISION OF EMPLOYMENT;

 (4) FINANCIAL OR LENDING SERVICES;

 (5) ACCESS TO OR THE PROVISION OF HEALTH CARE SERVICES;
 
 (6) HOUSING;

 (7) INSURANCE;

 (8) MARITAL STATUS; OR

 (9) LEGAL SERVICE.

 (E) (1) “CONSUMER” MEANS AN INDIVIDUAL WHO IS A RESIDENT OF THE STATE AND IS ACTING ONLY IN A PERSONAL OR HOUSEHOLD CONTEXT.

 (2) “CONSUMER” DOES NOT INCLUDE AN INDIVIDUAL ACTING IN A COMMERCIAL OR EMPLOYMENT CONTEXT.

 (F) “DEPLOYER” MEANS A PERSON DOING BUSINESS IN THE STATE THAT DEPLOYS OR USES A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM TO MAKE A CONSEQUENTIAL DECISION IN THE STATE.

 (G) “DEVELOPER” MEANS A PERSON DOING BUSINESS IN THE STATE THAT DEVELOPS OR INTENTIONALLY AND SUBSTANTIALLY MODIFIES A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM THAT IS OFFERED, SOLD, LEASED, GIVEN, OR OTHERWISE PROVIDED TO CONSUMERS IN THE STATE.

 (H) (1) “GENERAL–PURPOSE ARTIFICIAL INTELLIGENCE MODEL” MEANS A MODEL USED BY AN ARTIFICIAL INTELLIGENCE SYSTEM THAT:

 (I) DISPLAYS SIGNIFICANT GENERALITY;

 (II) IS CAPABLE OF COMPETENTLY PERFORMING A WIDE RANGE OF DISTINCT TASKS; AND

 (III) CAN BE INTEGRATED INTO A VARIETY OF DOWNSTREAM APPLICATIONS OR SYSTEMS.

 (2) “GENERAL–PURPOSE ARTIFICIAL INTELLIGENCE MODEL” DOES NOT INCLUDE AN ARTIFICIAL INTELLIGENCE MODEL THAT IS USED FOR DEVELOPMENT, PROTOTYPING, AND RESEARCH ACTIVITIES BEFORE THE ARTIFICIAL INTELLIGENCE MODEL IS RELEASED ON THE MARKET.

 (I) “GENERATIVE ARTIFICIAL INTELLIGENCE” MEANS ARTIFICIAL INTELLIGENCE THAT IS CAPABLE OF AND USED TO PRODUCE SYNTHETIC CONTENT, INCLUDING AUDIO, IMAGES, TEXT, AND VIDEOS.
 
 (J) “GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEM” MEANS ANY ARTIFICIAL INTELLIGENCE SYSTEM OR SERVICE THAT INCORPORATES GENERATIVE ARTIFICIAL INTELLIGENCE.

 (K) (1) “HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM” MEANS AN ARTIFICIAL INTELLIGENCE SYSTEM THAT IS SPECIFICALLY INTENDED TO AUTONOMOUSLY MAKE, OR BE A SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION.

 (2) “HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM” DOES NOT INCLUDE:

 (I) AN ARTIFICIAL INTELLIGENCE SYSTEM THAT IS INTENDED TO:

 1. PERFORM A NARROW PROCEDURAL TASK;

 2. IMPROVE THE RESULT OF A PREVIOUSLY COMPLETED HUMAN ACTIVITY;

 3. DETECT ANY DECISION–MAKING PATTERNS OR ANY DEVIATIONS FROM PREEXISTING DECISION–MAKING PATTERNS; OR

 4. PERFORM A PREPARATORY TASK TO AN ASSESSMENT RELEVANT TO A CONSEQUENTIAL DECISION; OR

 (II) THE FOLLOWING TECHNOLOGIES:

 1. ANTIFRAUD TECHNOLOGY THAT DOES NOT USE FACIAL RECOGNITION TECHNOLOGY;

 2. ARTIFICIAL INTELLIGENCE–ENABLED VIDEO GAME TECHNOLOGY;

 3. ANTIMALWARE AND ANTIVIRUS TECHNOLOGY;

 4. AUTONOMOUS VEHICLE TECHNOLOGY;

 5. CALCULATORS;

 6. CYBERSECURITY TECHNOLOGY;

 7. DATABASES AND DATA STORAGE;
 
 8. FIREWALL TECHNOLOGY;

 9. INTERNET DOMAIN REGISTRATION;

 10. INTERNET WEBSITE LOADING;

 11. NETWORKING;

 12. SPAM AND ROBOCALL FILTERING;

 13. SPELLCHECKING TECHNOLOGY;

 14. SPREADSHEETS;

 15. WEB CACHING;

 16. WEB HOSTING OR SIMILAR TECHNOLOGY; OR

 17. TECHNOLOGY THAT COMMUNICATES WITH CONSUMERS IN NATURAL LANGUAGE FOR THE PURPOSE OF PROVIDING USERS WITH INFORMATION, MAKING REFERRALS OR RECOMMENDATIONS, AND ANSWERING QUESTIONS AND IS SUBJECT TO AN ACCEPTABLE USE POLICY THAT PROHIBITS GENERATING CONTENT THAT IS DISCRIMINATORY OR UNLAWFUL.

 (L) (1) “INTENTIONAL AND SUBSTANTIAL MODIFICATION” MEANS A DELIBERATE CHANGE MADE TO:

 (I) AN ARTIFICIAL INTELLIGENCE SYSTEM THAT RESULTS IN A NEW MATERIAL RISK OF ALGORITHMIC DISCRIMINATION; OR

 (II) A GENERAL–PURPOSE ARTIFICIAL INTELLIGENCE MODEL THAT:

 1. AFFECTS COMPLIANCE OF THE GENERAL–PURPOSE ARTIFICIAL INTELLIGENCE MODEL;

 2. MATERIALLY CHANGES THE PURPOSE OF THE GENERAL–PURPOSE ARTIFICIAL INTELLIGENCE MODEL; OR

 3. RESULTS IN ANY NEW REASONABLY FORESEEABLE RISK OF ALGORITHMIC DISCRIMINATION.
 
 (2) “INTENTIONAL AND SUBSTANTIAL MODIFICATION” DOES NOT INCLUDE A CHANGE MADE TO A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, OR THE PERFORMANCE OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, IF:

 (I) THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM CONTINUES TO LEARN AFTER THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS DEPLOYED OR OFFERED, SOLD, LEASED, LICENSED, GIVEN, OR OTHERWISE MADE AVAILABLE TO A DEPLOYER; AND

 (II) THE CHANGE:

 1. IS MADE TO THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM AS A RESULT OF ANY LEARNING DESCRIBED IN ITEM (I) OF THIS PARAGRAPH;

 2. WAS PREDETERMINED BY THE DEPLOYER OR THE THIRD PARTY CONTRACTED BY THE DEPLOYER; AND

 3. WAS INCLUDED AND CONCLUDED WITHIN THE INITIAL IMPACT ASSESSMENT OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM UNDER § 14–47A–04(C) OF THIS SUBTITLE.

 (M) (1) “PERSON” MEANS AN INDIVIDUAL, AN ASSOCIATION, A COOPERATIVE, A CORPORATION, A LIMITED LIABILITY COMPANY, A PARTNERSHIP, A TRUST, A JOINT VENTURE, OR ANY OTHER LEGAL OR COMMERCIAL ENTITY AND ANY SUCCESSOR, REPRESENTATIVE, AGENCY, OR INSTRUMENTALITY THEREOF.

 (2) “PERSON” DOES NOT INCLUDE A GOVERNMENTAL UNIT.

 (N) “PRINCIPAL BASIS” MEANS THE USE OF AN OUTPUT OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM TO MAKE A DECISION WITHOUT:

 (1) HUMAN REVIEW, OVERSIGHT, INVOLVEMENT, OR INTERVENTION; OR

 (2) MEANINGFUL CONSIDERATION BY A HUMAN.

 (O) (1) “SUBSTANTIAL FACTOR” MEANS A FACTOR GENERATED BY AN ARTIFICIAL INTELLIGENCE SYSTEM THAT IS:

 (I) THE PRINCIPAL BASIS FOR MAKING A CONSEQUENTIAL DECISION; OR
 
 (II) CAPABLE OF ALTERING THE OUTCOME OF A CONSEQUENTIAL DECISION.

 (2) “SUBSTANTIAL FACTOR” INCLUDES ANY USE OF AN ARTIFICIAL INTELLIGENCE SYSTEM TO GENERATE ANY CONTENT, DECISION, PREDICTION, OR RECOMMENDATION CONCERNING A CONSUMER THAT IS USED AS THE PRINCIPAL BASIS TO MAKE A CONSEQUENTIAL DECISION CONCERNING THE CONSUMER.

 (P) “SYNTHETIC CONTENT” MEANS INFORMATION, SUCH AS AUDIO CLIPS, IMAGES, TEXT, AND VIDEO, THAT IS PRODUCED OR SIGNIFICANTLY MODIFIED OR GENERATED BY ALGORITHMS, INCLUDING BY AN ARTIFICIAL INTELLIGENCE SYSTEM.

14–47A–02.

 THIS SUBTITLE DOES NOT APPLY TO:

 (1) EXCEPT IN A SITUATION WHERE A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS USED TO MAKE, OR IS A SUBSTANTIAL FACTOR IN MAKING, A DECISION CONCERNING EMPLOYMENT OR HOUSING, A DEVELOPER OR DEPLOYER THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM ACQUIRED BY OR FOR THE FEDERAL GOVERNMENT OR ANY FEDERAL AGENCY OR DEPARTMENT, INCLUDING:

 (I) THE U.S. DEPARTMENT OF COMMERCE;

 (II) THE U.S. DEPARTMENT OF DEFENSE; AND

 (III) THE NATIONAL AERONAUTICS AND SPACE ADMINISTRATION;

 (2) AN INSURER, OR A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM DEVELOPED OR DEPLOYED BY AN INSURER FOR USE IN THE BUSINESS OF INSURANCE, IF THE INSURER IS REGULATED AND SUPERVISED BY THE INSURANCE ADMINISTRATION AND SUBJECT TO THE PROVISIONS UNDER TITLE 13 OF THIS ARTICLE; OR

 (3) A DEVELOPER, A DEPLOYER, OR ANY OTHER PERSON WHO:

 (I) IS A COVERED ENTITY UNDER THE FEDERAL HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT OF 1996 UNDER 42 U.S.C. § 1320D THROUGH 1320D–9, AND THE CORRESPONDING FEDERAL REGULATIONS; AND
 
 (II) PROVIDES:

 1. HEALTH CARE RECOMMENDATIONS GENERATED BY AN ARTIFICIAL INTELLIGENCE SYSTEM IN WHICH A HEALTH CARE PROVIDER IS REQUIRED TO TAKE ACTION TO IMPLEMENT THE HEALTH CARE RECOMMENDATIONS; OR

 2. SERVICES USING AN ARTIFICIAL INTELLIGENCE SYSTEM FOR AN ADMINISTRATIVE, FINANCIAL, QUALITY MEASUREMENT, SECURITY, OR PERFORMANCE IMPROVEMENT FUNCTION.

14–47A–03.

 (A) (1) BEGINNING FEBRUARY 1, 2026, A DEVELOPER OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL USE REASONABLE CARE TO PROTECT CONSUMERS FROM ANY KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION.

 (2) IN AN ENFORCEMENT ACTION BROUGHT BY THE ATTORNEY GENERAL UNDER THIS SUBTITLE, THERE SHALL BE A REBUTTABLE PRESUMPTION THAT A DEVELOPER USED REASONABLE CARE AS REQUIRED UNDER THIS SUBSECTION IF THE DEVELOPER COMPLIED WITH THE PROVISIONS OF THIS SECTION.

 (B) BEGINNING FEBRUARY 1, 2026, AND SUBJECT TO SUBSECTION (D) OF THIS SECTION, A DEVELOPER OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM MAY NOT OFFER, SELL, LEASE, GIVE, OR OTHERWISE PROVIDE TO A DEPLOYER, OR OTHER DEVELOPER, A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, UNLESS THE DEVELOPER MAKES AVAILABLE TO THE DEPLOYER OR OTHER DEVELOPER:

 (1) A STATEMENT DISCLOSING THE INTENDED USES OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (2) DOCUMENTATION DISCLOSING:

 (I) THE KNOWN OR REASONABLY FORESEEABLE LIMITATIONS OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING THE KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION ARISING FROM THE INTENDED USES OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (II) THE PURPOSE OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM AND THE INTENDED BENEFITS AND USES OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;
 
 (III) A SUMMARY DESCRIBING THE MANNER IN WHICH THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS EVALUATED FOR PERFORMANCE BEFORE THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS LICENSED, SOLD, LEASED, GIVEN, OR OTHERWISE MADE AVAILABLE TO A DEPLOYER;

 (IV) THE MEASURES THE DEVELOPER HAS TAKEN TO MITIGATE REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT THE DEVELOPER KNOWS ARISE FROM DEPLOYMENT OR USE OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND

 (V) THE MANNER IN WHICH AN INDIVIDUAL CAN USE THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM AND MONITOR THE PERFORMANCE OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM FOR RISK OF ALGORITHMIC DISCRIMINATION;

 (3) DOCUMENTATION DESCRIBING:

 (I) THE MANNER IN WHICH THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS EVALUATED FOR PERFORMANCE, AND MITIGATION OF ALGORITHMIC DISCRIMINATION, BEFORE THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM WAS OFFERED, SOLD, LEASED, LICENSED, GIVEN, OR OTHERWISE MADE AVAILABLE TO THE DEPLOYER OR OTHER DEVELOPER;

 (II) A HIGH–LEVEL SUMMARY OF THE MANNER IN WHICH DATA SOURCES WERE EVALUATED FOR POTENTIAL BIAS AND APPROPRIATE MITIGATIONS WERE APPLIED;

 (III) THE INTENDED OUTPUTS OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (IV) THE MEASURES THE DEVELOPER HAS TAKEN TO MITIGATE ANY KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION THAT MAY ARISE FROM REASONABLY FORESEEABLE DEPLOYMENT OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND

 (V) THE MANNER IN WHICH THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM SHOULD BE USED, NOT USED, AND MONITORED BY AN INDIVIDUAL WHEN BEING USED; AND

 (4) ANY ADDITIONAL DOCUMENTATION THAT IS REASONABLY NECESSARY TO ASSIST A DEPLOYER TO:
 
 (I) UNDERSTAND THE OUTPUTS OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND

 (II) MONITOR THE PERFORMANCE OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM FOR RISKS OF ALGORITHMIC DISCRIMINATION.

 (C) (1) SUBJECT TO SUBSECTION (D) OF THIS SECTION, A DEVELOPER THAT, ON OR AFTER FEBRUARY 1, 2026, OFFERS, SELLS, LEASES, LICENSES, GIVES, OR OTHERWISE MAKES AVAILABLE TO A DEPLOYER OR OTHER DEVELOPER A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, TO THE EXTENT PRACTICABLE, SHALL MAKE AVAILABLE TO DEPLOYERS AND OTHER DEVELOPERS THE DOCUMENTATION AND INFORMATION NECESSARY FOR A DEPLOYER OR THIRD PARTY CONTRACTED BY A DEPLOYER TO COMPLETE AN IMPACT ASSESSMENT UNDER § 14–47A–04(C) OF THIS SUBTITLE.

 (2) THE DEVELOPER SHALL MAKE DOCUMENTATION AND INFORMATION AVAILABLE AS REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION THROUGH:

 (I) ARTIFACTS, INCLUDING MODEL CARD FILES THAT ACCOMPANY THE MODEL AND PROVIDE INFORMATION ABOUT DISCOVERABILITY, REPRODUCIBILITY, AND SHARING;

 (II) DATASET CARD FILES THAT:

 1. ARE USED TO INFORM USERS ABOUT HOW TO RESPONSIBLY USE THE DATA IN A DATASET; AND

 2. CONTAIN INFORMATION ABOUT POTENTIAL BIASES OF THE DATA; OR

 (III) OTHER IMPACT ASSESSMENTS.

 (D) (1) FOR ANY DISCLOSURE REQUIRED UNDER THIS SECTION, A DEVELOPER, NOT LATER THAN 90 DAYS AFTER THE DEVELOPER PERFORMS AN INTENTIONAL AND SUBSTANTIAL MODIFICATION TO A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, SHALL UPDATE THE DISCLOSURE TO ENSURE ACCURACY.

 (2) A DEVELOPER THAT ALSO SERVES AS A DEPLOYER FOR ANY HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS NOT REQUIRED TO GENERATE THE DOCUMENTATION REQUIRED UNDER THIS SECTION UNLESS THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS PROVIDED TO AN UNAFFILIATED ENTITY ACTING AS A DEPLOYER OR AS OTHERWISE REQUIRED BY LAW.
 
 (E) A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL BE PRESUMED TO SATISFY THE APPLICABLE REQUIREMENTS UNDER THIS SECTION AND ANY REGULATIONS ADOPTED IN ACCORDANCE WITH THIS SUBTITLE IF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS IN CONFORMITY WITH THE LATEST VERSION OF:

 (1) THE ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAMEWORK PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY;

 (2) STANDARD ISO/IEC 42001 OF THE INTERNATIONAL ORGANIZATION FOR STANDARDIZATION; OR

 (3) ANOTHER NATIONALLY OR INTERNATIONALLY RECOGNIZED RISK MANAGEMENT FRAMEWORK FOR ARTIFICIAL INTELLIGENCE SYSTEMS THAT IS SUBSTANTIALLY EQUIVALENT TO, AND AT LEAST AS STRINGENT AS, THE REQUIREMENTS ESTABLISHED UNDER THIS SECTION.

 (F) THIS SECTION MAY NOT BE CONSTRUED TO REQUIRE A DEVELOPER TO DISCLOSE ANY INFORMATION:

 (1) THAT IS A TRADE SECRET, AS DEFINED IN § 11–1201 OF THIS ARTICLE, OR OTHERWISE PROTECTED FROM DISCLOSURE UNDER STATE OR FEDERAL LAW; OR

 (2) THE DISCLOSURE OF WHICH WOULD:

 (I) PRESENT A SECURITY RISK TO THE DEVELOPER; OR

 (II) REQUIRE THE DEVELOPER TO DISCLOSE CONFIDENTIAL OR PROPRIETARY INFORMATION.

 (G) (1) EACH DEVELOPER OF A GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEM THAT GENERATES OR MODIFIES SYNTHETIC CONTENT SHALL ENSURE THAT THE OUTPUTS OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM:

 (I) ARE MARKED AT THE TIME THE OUTPUT IS GENERATED AND IN A MANNER THAT IS DETECTABLE BY CONSUMERS; AND

 (II) COMPLY WITH ANY ACCESSIBILITY REQUIREMENTS.
 
 (2) FOR SYNTHETIC CONTENT THAT IS AN AUDIO, IMAGE, OR VIDEO FORMED AS PART OF AN ARTISTIC, CREATIVE, SATIRICAL, FICTIONAL, OR ANALOGOUS WORK OR PROGRAM, A MARKING OF HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEMS UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL BE LIMITED IN A MANNER NOT TO HINDER THE DISPLAY OR ENJOYMENT OF THE WORK OR PROGRAM.

 (3) THE MARKING OF OUTPUTS REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION DOES NOT APPLY TO:

 (I) SYNTHETIC CONTENT THAT:

 1. CONSISTS EXCLUSIVELY OF TEXT;

 2. IS PUBLISHED TO INFORM THE PUBLIC ON ANY MATTER OF PUBLIC INTEREST; OR

 3. IS UNLIKELY TO MISLEAD A REASONABLE PERSON CONSUMING THE SYNTHETIC CONTENT; OR
 
 (II) THE OUTPUTS OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM THAT:

 1. PERFORMS AN ASSISTIVE FUNCTION FOR STANDARD EDITING;

 2. DOES NOT SUBSTANTIALLY ALTER THE INPUT DATA PROVIDED BY THE DEVELOPER; OR

 3. IS USED TO DETECT, PREVENT, INVESTIGATE, OR PROSECUTE A CRIME AS AUTHORIZED BY LAW.
 
14–47A–04.

 (A) (1) BEGINNING FEBRUARY 1, 2026, EACH DEPLOYER SHALL USE REASONABLE CARE TO PROTECT CONSUMERS FROM ANY KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION.

 (2) IN AN ENFORCEMENT ACTION BROUGHT BY THE ATTORNEY GENERAL UNDER THIS SUBTITLE, THERE SHALL BE A REBUTTABLE PRESUMPTION THAT A DEPLOYER USED REASONABLE CARE AS REQUIRED UNDER THIS SUBSECTION IF THE DEPLOYER COMPLIED WITH THE REQUIREMENTS OF THIS SECTION.
 
 (B) (1) BEGINNING FEBRUARY 1, 2026, AND SUBJECT TO PARAGRAPH (2) OF THIS SUBSECTION, A DEPLOYER OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL DESIGN, IMPLEMENT, AND MAINTAIN A RISK MANAGEMENT POLICY AND PROGRAM FOR THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM.

 (2) EACH RISK MANAGEMENT POLICY DESIGNED, IMPLEMENTED, AND MAINTAINED IN ACCORDANCE WITH PARAGRAPH (1) OF THIS SUBSECTION SHALL:

 (I) SPECIFY THE PRINCIPLES, PROCESSES, AND PERSONNEL THAT THE DEPLOYER USES TO IDENTIFY, MITIGATE, AND DOCUMENT ANY RISK OF ALGORITHMIC DISCRIMINATION THAT IS A REASONABLY FORESEEABLE CONSEQUENCE OF DEPLOYING OR USING THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (II) BE REGULARLY REVIEWED AND UPDATED OVER THE LIFE CYCLE OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND

 (III) BE REASONABLE AND IN CONSIDERATION OF THE GUIDANCE AND STANDARDS PROVIDED IN THE LATEST VERSION OF:

 1. THE ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAMEWORK PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY;

 2. STANDARD ISO/IEC 42001 OF THE INTERNATIONAL ORGANIZATION FOR STANDARDIZATION;

 3. ANOTHER NATIONALLY OR INTERNATIONALLY RECOGNIZED RISK MANAGEMENT FRAMEWORK FOR ARTIFICIAL INTELLIGENCE SYSTEMS WITH REQUIREMENTS THAT ARE SUBSTANTIALLY EQUIVALENT TO, AND AT LEAST AS STRINGENT AS, THE REQUIREMENTS ESTABLISHED UNDER THIS SECTION; OR

 4. ANY RISK MANAGEMENT FRAMEWORK FOR ARTIFICIAL INTELLIGENCE SYSTEMS THAT THE ATTORNEY GENERAL MAY DESIGNATE AND THAT IS SUBSTANTIALLY EQUIVALENT TO, AND AT LEAST AS STRINGENT AS, THE GUIDANCE AND STANDARDS OF THE FRAMEWORK DESCRIBED IN ITEM 1 OF THIS ITEM.

 (3) A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL BE PRESUMED TO SATISFY THE REQUIREMENTS UNDER THIS SECTION AND ANY REGULATIONS ADOPTED IN ACCORDANCE WITH THIS SUBTITLE IF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS IN CONFORMITY WITH THE LATEST VERSION OF:

 (I) THE ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAMEWORK PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY;

 (II) STANDARD ISO/IEC 42001 OF THE INTERNATIONAL ORGANIZATION FOR STANDARDIZATION; OR

 (III) ANOTHER NATIONALLY OR INTERNATIONALLY RECOGNIZED RISK MANAGEMENT FRAMEWORK FOR ARTIFICIAL INTELLIGENCE SYSTEMS THAT IS SUBSTANTIALLY EQUIVALENT TO, AND AT LEAST AS STRINGENT AS, THE REQUIREMENTS ESTABLISHED UNDER THIS SECTION.
 
 (C) (1) SUBJECT TO PARAGRAPHS (2) AND (3) OF THIS SUBSECTION AND EXCEPT AS PROVIDED IN PARAGRAPH (4) OF THIS SUBSECTION:

 (I) ON OR AFTER FEBRUARY 1, 2026, BEFORE INITIAL DEPLOYMENT OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM OR USE OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, A DEPLOYER SHALL COMPLETE AN IMPACT ASSESSMENT OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND

 (II) AT LEAST 90 DAYS BEFORE A SIGNIFICANT UPDATE TO A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS MADE AVAILABLE, A DEPLOYER SHALL COMPLETE AN IMPACT ASSESSMENT OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IF THE UPDATE PRODUCES A NEW VERSION OR RELEASE OR SIMILAR CHANGE TO THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM THAT:

 1. INTRODUCES SIGNIFICANT CHANGES TO THE USE CASE OR KEY FUNCTIONALITY OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND

 2. RESULTS IN A NEW OR REASONABLY FORESEEABLE RISK OF ALGORITHMIC DISCRIMINATION.

 (2) EACH IMPACT ASSESSMENT COMPLETED UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL INCLUDE:

 (I) A STATEMENT BY THE DEPLOYER DISCLOSING THE PURPOSE AND INTENDED USE CASES AND DEPLOYMENT CONTEXT OF, AND BENEFITS AFFORDED BY, THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;
 
 (II) AN ANALYSIS OF:

 1. WHETHER THE DEPLOYMENT OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM POSES ANY KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMINATION;

 2. THE NATURE OF ANY ALGORITHMIC DISCRIMINATION; AND

 3. THE STEPS THAT HAVE BEEN TAKEN TO MITIGATE RISKS;

 (III) FOR A POSTDEPLOYMENT IMPACT ASSESSMENT COMPLETED UNDER THIS SUBSECTION, AN ANALYSIS OF WHETHER THE INTENDED USE CASES OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, AS UPDATED, WERE CONSISTENT WITH, OR VARIED FROM, THE DEVELOPER’S INTENDED USES OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (IV) A DESCRIPTION OF THE CATEGORIES OF DATA THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM PROCESSES AS INPUTS AND THE OUTPUTS THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES;

 (V) IF THE DEPLOYER USED DATA TO CUSTOMIZE THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, AN OVERVIEW OF THE CATEGORIES OF DATA THE DEPLOYER USED TO CUSTOMIZE THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (VI) A LIST OF METRICS USED TO EVALUATE THE PERFORMANCE AND KNOWN LIMITATIONS OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM;

 (VII) A DESCRIPTION OF TRANSPARENCY MEASURES TAKEN CONCERNING THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING ANY MEASURES TAKEN TO DISCLOSE TO A CONSUMER THAT A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS IN USE WHEN THE CONSUMER IS ENGAGING OR INTERACTING WITH A SYSTEM OR PRODUCT IN WHICH A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IS IN USE;

 (VIII) A DESCRIPTION OF POSTDEPLOYMENT MONITORING AND USER SAFEGUARDS RELATED TO THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING THE OVERSIGHT PROCESS ESTABLISHED BY THE DEPLOYER TO ADDRESS ISSUES ARISING FROM DEPLOYMENT OR USE OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM; AND
 
 (IX) AN ANALYSIS OF THE VALIDITY AND RELIABILITY OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, IN ACCORDANCE WITH CONTEMPORARY SOCIAL SCIENCE STANDARDS, AND A DESCRIPTION OF METRICS TO EVALUATE PERFORMANCE AND KNOWN LIMITATIONS OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM.

 (3) A DEPLOYER SHALL MAINTAIN A COMPLETED IMPACT ASSESSMENT OF A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM REQUIRED UNDER THIS SECTION AND ALL RECORDS CONCERNING THE IMPACT ASSESSMENT FOR AT LEAST 3 YEARS.

 (4) A SINGLE IMPACT ASSESSMENT MAY ADDRESS A COMPARABLE SET OF HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEMS DEPLOYED BY A DEPLOYER.

 (D) BEGINNING FEBRUARY 1, 2026, BEFORE A DEPLOYER DEPLOYS A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, THE DEPLOYER SHALL:

 (1) NOTIFY THE CONSUMER THAT THE DEPLOYER HAS DEPLOYED A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM TO MAKE, OR BE A SUBSTANTIAL FACTOR IN MAKING, A CONSEQUENTIAL DECISION ABOUT THE CONSUMER; AND

 (2) PROVIDE TO THE CONSUMER:

 (I) A STATEMENT DISCLOSING THE PURPOSE OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM AND THE NATURE OF THE CONSEQUENTIAL DECISION;

 (II) IF APPLICABLE, INFORMATION CONCERNING THE CONSUMER’S RIGHT TO OPT OUT OF THE PROCESSING OF THE CONSUMER’S PERSONAL DATA IN ACCORDANCE WITH STATE OR FEDERAL LAW;

 (III) CONTACT INFORMATION FOR THE DEPLOYER; AND

 (IV) A DESCRIPTION, IN PLAIN LANGUAGE, OF THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING:

 1. THE PERSONAL CHARACTERISTICS OR ATTRIBUTES THE ARTIFICIAL INTELLIGENCE SYSTEM WILL MEASURE OR ASSESS AND THE METHOD BY WHICH THE SYSTEM MEASURES OR ASSESSES PERSONAL CHARACTERISTICS OR ATTRIBUTES;
 
 2. THE RELEVANCE OF PERSONAL CHARACTERISTICS OR ATTRIBUTES TO CONSEQUENTIAL DECISIONS RELATED TO THE ARTIFICIAL INTELLIGENCE SYSTEM;

 3. ANY HUMAN COMPONENTS OF THE ARTIFICIAL INTELLIGENCE SYSTEM;

 4. THE MANNER IN WHICH AUTOMATED COMPONENTS OF THE ARTIFICIAL INTELLIGENCE SYSTEM ARE USED TO INFORM CONSEQUENTIAL DECISIONS RELATED TO THE SYSTEM; AND

 5. A DIRECT LINK TO A PUBLICLY ACCESSIBLE WEB PAGE ON THE DEPLOYER’S WEBSITE THAT CONTAINS A DESCRIPTION IN PLAIN LANGUAGE OF THE LOGIC USED IN THE ARTIFICIAL INTELLIGENCE SYSTEM, INCLUDING:

 A. KEY PARAMETERS THAT AFFECT THE OUTPUT OF THE SYSTEM;

 B. THE TYPE AND SOURCE OF DATA COLLECTED FROM INDIVIDUALS AND PROCESSED BY THE SYSTEM IN MAKING OR ASSISTING IN MAKING A CONSEQUENTIAL DECISION; AND

 C. THE RESULTS OF THE MOST RECENT IMPACT ASSESSMENT REQUIRED UNDER THIS SECTION.

 (E) BEGINNING FEBRUARY 1, 2026, AND SUBJECT TO SUBSECTION (F) OF THIS SECTION, A DEPLOYER THAT HAS DEPLOYED A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM SHALL, IF A CONSEQUENTIAL DECISION IS ADVERSE TO THE CONSUMER, PROVIDE TO THE CONSUMER:

 (1) A STATEMENT DISCLOSING THE PRINCIPAL REASON OR REASONS FOR THE ADVERSE CONSEQUENTIAL DECISION, INCLUDING:

 (I) THE DEGREE TO WHICH, AND MANNER IN WHICH, THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM CONTRIBUTED TO THE ADVERSE CONSEQUENTIAL DECISION;

 (II) THE TYPE OF DATA THAT WERE PROCESSED BY THE HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM IN MAKING THE ADVERSE CONSEQUENTIAL DECISION; AND

 (III) THE SOURCE OF THE DATA DESCRIBED IN ITEM (II) OF THIS ITEM; AND
 
 (2) AN OPPORTUNITY TO : 1 
 
 (I) CORRECT ANY INCORRECT PERSONAL DATA THAT T HE 2 
HIGH–RISK ARTIFICIAL INTE LLIGENCE SYSTEM PROC ESSED IN MAKING , OR USED AS 3 
A SUBSTANTIAL FACTOR IN MAKING, THE ADVERSE CONSEQUENTIA L DECISION; AND 4 
 
 (II) APPEAL THE ADVERSE CONSEQUENTIAL DECISION, ALLOWING FOR HUMAN REVIEW UNLESS PROVIDING THIS OPPORTUNITY:

 1. IS NOT IN THE BEST INTEREST OF THE CONSUMER; OR

 2. MAY CAUSE A DELAY THAT POSES A RISK TO THE LIFE OR SAFETY OF THE CONSUMER.
 
 (F) THE DEPLOYER SHALL PROVIDE THE INFORMATION REQUIRED UNDER SUBSECTION (E) OF THIS SECTION:
 
 (1) DIRECTLY TO THE CONSUMER;
 
 (2) IN PLAIN LANGUAGE THAT IS TRANSLATED INTO ANY LANGUAGES IN WHICH THE DEPLOYER, IN THE ORDINARY COURSE OF THE DEPLOYER’S BUSINESS, PROVIDES CONTRACTS, DISCLAIMERS, SALE ANNOUNCEMENTS, AND ANY OTHER INFORMATION TO CONSUMERS; AND
 
 (3) IN A FORMAT THAT IS ACCESSIBLE TO CONSUMERS WITH DISABILITIES.
 
14–47A–05.
 
 (A) THE REQUIREMENTS OF THIS SUBTITLE MAY NOT BE CONSTRUED TO RESTRICT A DEVELOPER’S, A DEPLOYER’S, OR ANOTHER PERSON’S ABILITY TO:
 
 (1) COMPLY WITH FEDERAL, STATE, OR LOCAL LAW;
 
 (2) COMPLY WITH A CIVIL, CRIMINAL, OR REGULATORY INQUIRY, AN INVESTIGATION, OR A SUBPOENA OR SUMMONS BY A FEDERAL, STATE, LOCAL, OR OTHER GOVERNMENTAL AUTHORITY;
 
 (3) COOPERATE WITH A LAW ENFORCEMENT AGENCY CONCERNING CONDUCT OR ACTIVITY THAT THE DEVELOPER, DEPLOYER, OR OTHER PERSON REASONABLY AND IN GOOD FAITH BELIEVES MAY VIOLATE FEDERAL, STATE, OR LOCAL LAW;
 
 
 
 (4) INVESTIGATE, ESTABLISH, EXERCISE, PREPARE FOR, OR DEFEND A LEGAL CLAIM;
 
 (5) TAKE IMMEDIATE STEPS TO PROTECT AN INTEREST THAT IS ESSENTIAL FOR THE LIFE OR PHYSICAL SAFETY OF A CONSUMER OR ANOTHER INDIVIDUAL;
 
 (6) (I) BY ANY MEANS OTHER THAN FACIAL RECOGNITION TECHNOLOGY, PREVENT, DETECT, PROTECT AGAINST, OR RESPOND TO:
 
 1. A SECURITY INCIDENT;
 
 2. A MALICIOUS OR DECEPTIVE ACTIVITY; OR
 
 3. IDENTITY THEFT, FRAUD, HARASSMENT, OR ANY OTHER ILLEGAL ACTIVITY;
 
 (II) INVESTIGATE, REPORT, OR PROSECUTE THE PERSONS RESPONSIBLE FOR ANY ACTION DESCRIBED IN ITEM (I) OF THIS ITEM; OR
 
 (III) PRESERVE THE INTEGRITY OR SECURITY OF SYSTEMS;
 
 (7) ENGAGE IN PUBLIC OR PEER–REVIEWED SCIENTIFIC OR STATISTICAL RESEARCH IN THE PUBLIC INTEREST THAT:
 
 (I) ADHERES TO ALL OTHER APPLICABLE ETHICS AND PRIVACY LAWS; AND
 
 (II) IS APPROVED, MONITORED, AND GOVERNED BY AN INSTITUTIONAL REVIEW BOARD OR SIMILAR INDEPENDENT OVERSIGHT ENTITY THAT DETERMINES:
 
 1. WHETHER THE EXPECTED BENEFITS OF THE RESEARCH OUTWEIGH THE RISKS ASSOCIATED WITH THE RESEARCH; AND
 
 2. WHETHER THE DEVELOPER OR DEPLOYER HAS IMPLEMENTED REASONABLE SAFEGUARDS TO MITIGATE THE RISKS ASSOCIATED WITH THE RESEARCH;
 
 (8) CONDUCT RESEARCH, TESTING, AND DEVELOPMENT ACTIVITIES REGARDING AN ARTIFICIAL INTELLIGENCE SYSTEM OR MODEL, OTHER THAN TESTING CONDUCTED UNDER REAL–WORLD CONDITIONS, BEFORE AN ARTIFICIAL INTELLIGENCE SYSTEM OR MODEL IS PLACED ON THE MARKET, DEPLOYED, OR PUT INTO SERVICE, AS APPLICABLE;
 
 (9) EFFECTUATE A PRODUCT RECALL;
 
 (10) IDENTIFY AND REPAIR TECHNICAL ERRORS THAT IMPAIR EXISTING OR INTENDED FUNCTIONALITY; OR
 
 (11) ASSIST ANOTHER DEVELOPER, DEPLOYER, OR PERSON WITH ANY OF THE OBLIGATIONS IMPOSED UNDER THIS SUBTITLE.
 
 (B) THE OBLIGATIONS IMPOSED ON DEVELOPERS, DEPLOYERS, OR OTHER PERSONS UNDER THIS SUBTITLE MAY NOT APPLY WHEN COMPLIANCE BY THE DEVELOPER, DEPLOYER, OR OTHER PERSON WOULD VIOLATE AN EVIDENTIARY PRIVILEGE UNDER THE LAWS OF THE STATE.
 
14–47A–06.
 
 (A) A PERSON WHO VIOLATES THIS SUBTITLE IS SUBJECT TO A FINE NOT EXCEEDING $1,000 AND, AS APPLICABLE, REASONABLE ATTORNEY’S FEES, EXPENSES, AND COSTS, AS DETERMINED BY THE COURT.
 
 (B) A PERSON WHO WILLFULLY VIOLATES THIS SUBTITLE IS SUBJECT TO A FINE OF AT LEAST $1,000 AND NOT EXCEEDING $10,000 AND, AS APPLICABLE, REASONABLE ATTORNEY’S FEES, EXPENSES, AND COSTS, AS DETERMINED BY THE COURT.
 
 (C) EACH VIOLATION OF THIS SUBTITLE IS A SEPARATE VIOLATION SUBJECT TO THE CIVIL PENALTIES IMPOSED UNDER THIS SECTION.
 
14–47A–07.
 
 (A) THE ATTORNEY GENERAL MAY ENFORCE THIS SUBTITLE.
 
 (B) TO CARRY OUT THE REQUIREMENTS OF THIS SUBTITLE, THE ATTORNEY GENERAL MAY:
 
 (1) REQUIRE THAT A DEVELOPER DISCLOSE TO THE ATTORNEY GENERAL:
 
 (I) A STATEMENT OR DOCUMENTATION DESCRIBED IN THIS SUBTITLE RELEVANT TO AN INVESTIGATION CONDUCTED BY THE ATTORNEY GENERAL; AND
 
 
 
 (II) A RISK MANAGEMENT POLICY DESIGNED AND IMPLEMENTED, AN IMPACT ASSESSMENT COMPLETED, OR A RECORD MAINTAINED UNDER THIS SUBTITLE RELEVANT TO AN INVESTIGATION CONDUCTED BY THE ATTORNEY GENERAL;
 
 (2) SUBJECT TO SUBSECTION (C) OF THIS SECTION, INITIATE A CIVIL ACTION AGAINST A PERSON THAT VIOLATES THIS SUBTITLE; AND
 
 (3) ADOPT REGULATIONS.
 
 (C) (1) BEFORE BRINGING AN ACTION AGAINST A DEVELOPER OR DEPLOYER FOR A VIOLATION OF THIS SUBTITLE, THE ATTORNEY GENERAL, IN CONSULTATION WITH THE DEVELOPER OR DEPLOYER, SHALL DETERMINE IF IT IS POSSIBLE TO CURE THE VIOLATION.
 
 (2) IF IT IS POSSIBLE TO CURE THE VIOLATION, THE ATTORNEY GENERAL MAY ISSUE A NOTICE OF VIOLATION TO THE DEVELOPER OR DEPLOYER AND AFFORD THE DEVELOPER OR DEPLOYER THE OPPORTUNITY TO CURE THE VIOLATION WITHIN 45 DAYS AFTER THE RECEIPT OF THE NOTICE OF VIOLATION.
 
 (3) IN DETERMINING WHETHER TO GRANT A DEVELOPER OR DEPLOYER AN OPPORTUNITY TO CURE A VIOLATION, THE ATTORNEY GENERAL SHALL CONSIDER:
 
 (I) THE NUMBER OF VIOLATIONS;
 
 (II) THE SIZE AND COMPLEXITY OF THE DEVELOPER OR DEPLOYER;
 
 (III) THE NATURE AND EXTENT OF THE DEVELOPER’S OR DEPLOYER’S BUSINESS;
 
 (IV) WHETHER THERE IS A SUBSTANTIAL LIKELIHOOD OF INJURY TO THE PUBLIC;
 
 (V) THE SAFETY OF PERSONS OR PROPERTY; AND
 
 (VI) WHETHER A VIOLATION WAS LIKELY CAUSED BY A HUMAN OR TECHNICAL ERROR.
 
 (4) IF A DEVELOPER OR DEPLOYER FAILS TO CURE A VIOLATION WITHIN 45 DAYS AFTER THE RECEIPT OF A NOTICE OF VIOLATION UNDER PARAGRAPH (2) OF THIS SUBSECTION, THE ATTORNEY GENERAL MAY PROCEED WITH THE ACTION.
 
 (D) IN AN ACTION BROUGHT BY THE ATTORNEY GENERAL UNDER THIS SECTION, IT IS AN AFFIRMATIVE DEFENSE IF:
 
 (1) A VIOLATION OF ANY PROVISION OF THIS SUBTITLE IS DISCOVERED THROUGH RED–TEAMING, WHICH IS ADVERSARIAL TESTING CONDUCTED FOR THE PURPOSE OF:
 
 (I) IDENTIFYING THE POTENTIAL ADVERSE BEHAVIORS OR OUTCOMES OF AN ARTIFICIAL INTELLIGENCE SYSTEM;
 
 (II) IDENTIFYING HOW THE BEHAVIORS OR OUTCOMES OCCUR; AND
 
 (III) STRESS TESTING THE SAFEGUARDS AGAINST THE BEHAVIORS OR OUTCOMES; AND
 
 (2) NOT LATER THAN 45 DAYS AFTER DISCOVERING A VIOLATION UNDER PARAGRAPH (1) OF THIS SUBSECTION, THE DEVELOPER OR DEPLOYER:
 
 (I) CURES THE VIOLATION;
 
 (II) PROVIDES NOTICE TO THE ATTORNEY GENERAL, IN A FORM AND MANNER PRESCRIBED BY THE ATTORNEY GENERAL, THAT THE VIOLATION HAS BEEN CURED AND EVIDENCE THAT ANY HARM CAUSED BY THE VIOLATION HAS BEEN MITIGATED; AND
 
 (III) IS OTHERWISE IN COMPLIANCE WITH THE REQUIREMENTS OF THIS SUBTITLE.
 
14–47A–08.
 
 A CONSUMER MAY BRING A CIVIL ACTION AGAINST A DEPLOYER IF:
 
 (1) THE CONSUMER INITIALLY FILED A TIMELY ADMINISTRATIVE CHARGE OR COMPLAINT UNDER FEDERAL, STATE, OR LOCAL LAW ALLEGING A DISCRIMINATORY ACT BY THE DEPLOYER AS A RESULT OF A CONSEQUENTIAL DECISION ABOUT THE CONSUMER THAT IS MADE BY A HIGH–RISK ARTIFICIAL INTELLIGENCE SYSTEM OF THE DEPLOYER;
 
 
 (2) AT LEAST 180 DAYS HAVE ELAPSED SINCE THE DATE OF FILING OF THE ADMINISTRATIVE COMPLAINT; AND
 
 (3) THE CIVIL ACTION IS FILED NOT MORE THAN 2 YEARS AFTER THE OCCURRENCE OF THE ALLEGED DISCRIMINATORY ACT.
 
 SECTION 2. AND BE IT FURTHER ENACTED, That, if any provision of this Act or the application of any provision of this Act to any person or circumstance is held invalid for any reason in a court of competent jurisdiction, the invalidity does not affect other provisions or any other application of this Act that can be given effect without the invalid provision or application, and for this purpose the provisions of this Act are declared severable.
 
 SECTION 3. AND BE IT FURTHER ENACTED, That this Act shall take effect October 1, 2025.