Texas Responsible Artificial Intelligence Governance Act (TRAIGA)
Phase II: High-Stakes AI Deployment Safeguards
Current Version: v1.1 (Updated December 2025)
A targeted deployment-level add-on to House Bill 149
Introduction
This page presents the current version (v1.1) of a targeted Phase II add-on to the Texas Responsible Artificial Intelligence Governance Act (HB 149). While HB 149 establishes statewide rules governing artificial intelligence systems, including prohibited uses, transparency requirements, enforcement authority, and a regulatory sandbox, it leaves a specific gap: artificial intelligence systems configured to initiate or execute real-world actions in high-stakes contexts. This add-on applies only at deployment, preserves innovation by excluding research, model training, and open-source development, and relies on TRAIGA's existing enforcement structures rather than creating new ones. Its purpose is to ensure that when artificial intelligence systems are permitted to act with real-world consequences, those actions remain subject to meaningful human authorization, auditability, and fail-safe safeguards.
The Problem
Artificial intelligence systems are increasingly deployed in areas with significant consequences for Texans, including healthcare, employment, financial services, housing access, public benefits, infrastructure, and core government operations.
House Bill 149 (TRAIGA) establishes important statewide protections and governance mechanisms for artificial intelligence. However, it does not explicitly address agentic AI systems—systems that are configured to initiate or execute actions without contemporaneous human approval.
In high-stakes settings, this creates an accountability gap: who is responsible when an automated system acts, errs, or causes harm?
The Solution
This proposal adds a narrow set of deployment safeguards to TRAIGA that apply only when AI systems are permitted to take or trigger high-stakes actions.
The add-on requires:
Meaningful human authorization before consequential actions
Audit logs and traceability for actions and overrides
Fail-safe defaults when systems encounter uncertainty or anomalies
Ongoing monitoring and incident response
Human review and explanation rights for affected individuals
The focus is on how AI is deployed, not how it is built.
What This Proposal Does Not Do
This add-on is intentionally limited in scope. It does not:
Ban artificial intelligence
Regulate AI research, training, or compute
Restrict open-source models
Create a new AI agency or bureaucracy
Regulate speech or content moderation
It applies only to deployment-level use of high-stakes, action-taking systems.
(All materials below reflect the current version: v1.1)
📦 Complete Legislative Packet
Download: TRAIGA Phase II – Legislative Drafting & Briefing Packet (PDF)
(All materials combined for review and circulation)
How This Helps Legislative Offices
Provides clear amendment-ready legislative language
Supports committee preparation and testimony
Lays out statutory context for efficient legal review
Offers ready-to-use materials for stakeholder engagement
Prepared By
Kevin Bihan-Poudec
Dallas, Texas 75219
Voice For Change Foundation
info@voiceforchangefoundation.org
www.voiceforchangefoundation.org

