Client Alerts & Insights
The U.S. is Scrambling to Regulate AI Without a Comprehensive Plan
August 9, 2023
While the European Union sprints ahead with draft legislation to closely regulate artificial intelligence (AI), the United States (U.S.) lags behind its western counterpart, leaving little regulatory guidance for U.S. AI companies. But introductory bills, statements, reports, and plans provide a glimpse into what AI regulation in the U.S. may look like in the future.
On Capitol Hill, Senate Majority Leader Chuck Schumer is making a high-profile push for broad federal AI intervention. In June 2023, he unveiled a framework to regulate AI that begins with forums for federal lawmakers to learn about the technology’s complexities, followed by the drafting of bills. While critics warn that such forums may delay the legislative process, Schumer’s framework is expected to bring renewed urgency to an issue that demands swift, focused attention.
Although no sweeping policy exists yet, U.S. senators have introduced a variety of piecemeal bipartisan AI bills in 2023. The Global Technology Leadership Act focuses on the country’s competitiveness in AI compared to its rival countries. The Transparent Automated Governance Act requires federal agencies to be transparent about their use of AI. And in an effort to protect consumers from harmful content produced by AI, two senators brought forth the No Section 230 Immunity for AI Act. If passed, the Act would give Americans who have been harmed by generative AI the power to sue AI companies in state or federal court.
Meanwhile, the White House has introduced a series of regulatory guidelines called a “Blueprint for an AI Bill of Rights,” updated its roadmap which addresses goals for federal investments in AI research and development, and signed an executive order tackling bias and algorithmic discrimination in AI. The Blueprint, developed by the White House Office of Science and Technology Policy, identifies five voluntary principles to shepherd the design and use of AI.
In late July, President Biden announced that his administration secured voluntary commitments from seven American companies, including Google, Amazon, Microsoft and Meta, meant to ensure their AI-related products are safe before they are released to the public. While the voluntary commitments are a start, intended as an immediate way to address these risks ahead of formal regulations, some critics say much more needs to be done to hold companies accountable for their products.
Federal committees and agencies are also launching their own AI initiatives in 2023. The National Artificial Intelligence Advisory Committee (NAIAC), which advises the President and the National AI Initiative Office, released its first report to the President in May. The report details how the U.S. government can utilize the benefits of AI and mitigate its dangers, outlining four major themes with more than a dozen objectives and two dozen recommended actions. It also states that the committee will concentrate its efforts on generative AI over the next two years. Similarly, the National Institute of Standards and Technology (NIST), a U.S. Department of Commerce agency, released an AI Risk Management Framework which recommends how to manage the AI-related risks to individuals, organizations, and society. The Framework is expected to be taken seriously by the federal government, if not adopted as an industry standard.
The Federal Trade Commission (FTC) is taking a tougher stance on biometric technology, which increasingly relies on AI. Biometric technology, which authenticates individuals based on their physical characteristics, was previously regulated by state or local laws, or not at all; the FTC now states that certain uses of the technology may violate the FTC Act. As a result, the FTC’s new stance may cause a jump in biometric privacy litigation as plaintiffs bring claims under their state’s unfair and deceptive trade practices act.
At the state level, six states (California, Colorado, Connecticut, Illinois, Maryland, Virginia) have enacted laws, or will have by the end of 2023, making it illegal for businesses to use AI to discriminate against or deceive consumers or job applicants. Nine states (California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Utah, Virginia) have comprehensive data privacy laws in place. At the local level, New York City now has an ordinance that regulates how AI is used in the hiring process. To learn more about these new data security and privacy laws, please visit Data Meets World, where Benesch discusses all things data privacy and data security.
While U.S. AI policies may be lacking, transatlantic AI partnerships are beginning. In June 2023, President Joe Biden and U.K. Prime Minister Rishi Sunak agreed to work together to address the dangers of AI and leverage the technology’s opportunities.
For additional information, please contact:
Kristopher J. Chandler at kchandler@beneschlaw.com or 614.223.9377.