Benesch, Friedlander, Coplan & Aronoff LLP
July 23, 2025

Texas Joins the Growing State-by-State AI Regulation Trend, Enacting a Comprehensive AI System Law

Client Bulletins
Authors: Megan C. Parker, Kristopher J. Chandler

The broad applicability of Texas’s comprehensive artificial intelligence legislation and upcoming effective date will require developers and deployers of AI systems to act quickly in ensuring compliance.

On June 22, 2025, Texas Gov. Greg Abbott signed into law HB 149, the Texas Responsible AI Governance Act (“TRAIGA”), making Texas the fourth state, after Colorado, Utah, and California, to pass AI-specific legislation.

Set to go into effect on January 1, 2026, TRAIGA follows Colorado’s Consumer Protections for Interactions with AI law and the EU AI Act in establishing comprehensive requirements for AI developers and deployers, categorical restrictions on the development and deployment of certain AI systems for certain purposes, and disclosure obligations when an AI system interacts with consumers.

Scope

Broader than similar state regulations that target only “high-risk” AI systems, TRAIGA adopts a sweeping definition of an AI system: any machine-based system that “infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

Several of TRAIGA’s restrictions, regulations, and obligations depend on whether a developer or deployer is (1) a covered person or entity, or (2) a government entity.

  • A developer is “a person who develops an [AI] system that is offered, sold, leased, given, or otherwise provided in” Texas, whereas a deployer is “a person who deploys an [AI] system for use in” Texas.
  • A covered person or entity is any person who (1) promotes, advertises or conducts business in Texas, (2) produces a product or service Texas residents use, or (3) develops or deploys an AI system in Texas.

Government entities subject to TRAIGA’s new restrictions and obligations include “any department, commission, board, office, authority, or other administrative unit” of or part of any political subdivision in Texas, “that exercises governmental functions under the authority of” Texas law. Importantly, TRAIGA specifically excludes hospital districts and higher education institutions.

The act protects all consumers, defined as a Texas resident “acting only in an individual or household context.” Key to this definition is that individuals “acting in a commercial or employment context” are not included as part of TRAIGA’s AI use disclosure obligations.

Requirements and Obligations

The focus of TRAIGA is to prohibit intentional discriminatory, manipulative, or other harmful uses of AI systems, as well as to provide consumers with clear and conspicuous notice of when they interact with an AI system.

Prohibitions on Certain AI Systems

AI systems are prohibited from being developed or deployed:

  • To intentionally encourage any person to physically harm themselves or others, or to engage in criminal activity;
  • With the sole intent of infringing, restricting, or impairing a person’s federal Constitutional rights;
  • To unlawfully discriminate against a federal or state protected class, where a disparate impact alone is insufficient absent evidence of intent; or
  • With the sole intent of producing, assisting, or aiding in producing or distributing child pornography or unlawful deepfake videos or images, including explicit text-based conversations while impersonating a minor under the age of 18.

Several of these prohibited acts are already illegal in Texas even without the use of AI, and TRAIGA provides that its prohibitions should be broadly construed and applied to facilitate responsible development of AI and protect the public from foreseeable risks associated with AI.

TRAIGA imposes further restrictions on governmental entities, including prohibiting use of AI systems:

  • For social scoring that may infringe individuals’ rights or result in other detrimental or unfavorable consequences to individuals; and
  • To identify individuals using biometric data, images, or other media captured from the internet without their consent, if gathering the data would infringe on individuals’ constitutional or other federal or Texas rights.

While the mere existence of online images or videos does not constitute consent unless posted by the individual, TRAIGA includes several exemptions from the consent requirement, including for financial institutions using voice print data, training AI systems not solely intended for identifying individuals, and fraud detection or similar security measures.

AI System Use Disclosure

A governmental agency that makes available an AI system intended to interact with consumers is required to provide a clear, conspicuous, plain language written disclosure to each consumer, either before or at the time of the interaction, that they are interacting with an AI system and not a live person.

Additionally, whenever an AI system is used in relation to a health care service or treatment, the provider must give a clear, conspicuous, plain language written disclosure of such use no later than the date the service or treatment is first provided, or in emergency cases, as soon as reasonably possible.

Regulatory Sandbox Program

In addition to the restrictions and prohibitions, TRAIGA creates a new regulatory sandbox program administered by the Department of Information Resources (“DIR”) designed to support development and testing of AI systems under relaxed constraints.

Parties must submit an application that includes (1) a detailed description of the AI system to be tested; (2) a benefit assessment addressing consumer, privacy, and public safety impacts; (3) mitigation plans in case of adverse consequences; and (4) proof of compliance with federal AI laws and regulations.

Accepted applicants will have 36 months to test and develop their AI systems, during which the Attorney General cannot file charges nor can state agencies pursue punitive action for violations waived under TRAIGA.

Participants are also required to submit quarterly reports to DIR detailing system performance metrics, updates on how the AI system mitigates risk, and feedback from consumers and stakeholders. DIR will then compile these quarterly reports into annual reports to the Texas legislature with recommendations for future legislation.

Texas Artificial Intelligence Advisory Council

Lastly, TRAIGA establishes the Texas AI Advisory Council (the “TAIAC”), comprised of seven members appointed by the governor, lieutenant governor, and speaker of the house, to conduct AI training programs for state agencies and local governments. The TAIAC is also tasked with issuing reports on topics like data privacy and security, AI ethics, and legal risks and compliance.

Enforcement and Penalties

The Texas Attorney General has exclusive authority to enforce TRAIGA, with civil penalties ranging from $10,000 to $12,000 per curable violation, $80,000 to $200,000 per uncurable violation, and $2,000 to $40,000 per day for ongoing violations.

State agencies may also sanction a party they license, if that party is found liable for TRAIGA violations and the Attorney General recommends additional enforcement, by suspending or revoking the party’s license and imposing monetary penalties of up to $100,000.

Further, the Attorney General must develop an online reporting mechanism to facilitate consumer complaints of potential violations, similar to the mechanism used in conjunction with the Texas Data Privacy and Security Act.

Following the receipt of a complaint, the Attorney General may request the following information pertaining to an AI system:

  • A high-level description of its purpose, intended use, deployment context and associated benefits;
  • A description of its programming or training data;
  • High-level descriptions of data processed as inputs and outputs produced;
  • Any metrics used to evaluate its performance;
  • Any known limitations of the system;
  • A high-level description of the post-deployment monitoring and safeguards used for the system; or
  • Any other documents reasonably necessary for the Attorney General’s investigation.

Importantly, a developer or deployer cannot be held liable for an end user or other third party using an AI system for a prohibited use—affirming TRAIGA’s focus on the developer’s or deployer’s intent in creating and distributing an AI system rather than on how end users actually use it.

Safe Harbors and Affirmative Defenses

The Attorney General is required to provide written notice of a violation before initiating an action, giving a party 60 days to cure the violation and implement any policy changes necessary to reasonably prevent further violations.

TRAIGA provides affirmative defenses for parties that discover their own violations via: (1) feedback from a developer, deployer, or other person; (2) red-teaming or adversarial testing procedures; (3) following state agency guidelines; or (4) an internal review process, provided the party complies with a nationally recognized risk management framework such as the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework.

Conclusion and Takeaways

While TRAIGA joins the list of recent legislative and regulatory efforts to regulate the development and use of AI, Texas’s new restrictions, prohibitions, and obligations for developing and deploying AI systems are more broadly applicable in scope and come with harsher penalties than most other regulatory regimes.

Developers and deployers have less than one year to ensure compliance with TRAIGA and to consider applying for TRAIGA’s regulatory sandbox program. Given how technically complex AI models are, companies that develop or deploy AI systems used or to be used in Texas should take a compliance-by-design approach.

Continue to follow Benesch’s AI Commission as we address the evolving regulatory landscape of AI, impacts of new regulations and steps toward compliance. Stay tuned!

Megan C. Parker at mparker@beneschlaw.com or 216.363.4416.

Kristopher J. Chandler at kchandler@beneschlaw.com or 614.223.9377.

With thanks to Benesch summer associate, Elias Najm (J.D. ’27 candidate, Case Western Reserve University School of Law), who assisted in preparing this article.
