Where AI Regulation Stands Today in the U.S., According to a Lawyer

Raymond Sun

In the world of AI, the United States leads the way in both research and technology.

Despite this leadership, the nation's approach to regulating AI is messy – to say the least – characterized by a patchwork of laws and non-binding frameworks.

As a lawyer specializing in emerging tech law, below I’ll break down the reasons for this fragmented state – and why it may not be as detrimental as it seems.

The Current State of U.S. AI Regulation

As of January 2024, the U.S. doesn’t have a binding federal law specifically regulating the development, deployment and use of AI.

While the country does have a few AI-related laws, they mostly relate to specific administrative matters within the federal government and have little to no relevance outside the public sector – the AI Training Act, for example.

Instead, the landscape is defined by a fragmented patchwork of existing laws that could apply to specific risks of AI on a case-by-case basis – think privacy, discrimination, and IP theft. But nothing that directly regulates AI technology itself.

That said, several government agencies have published frameworks and materials to promote the responsible design, development, and use of AI.

The landmark move to date is President Biden’s executive order on AI, issued in October 2023.

Using his presidential powers, he directed federal agencies to implement various strategic, regulatory, and budget initiatives, expanding on the earlier Blueprint for an AI Bill of Rights.

Notably, the Order invoked the Defense Production Act, which gives the President the power to order companies to produce goods and services to support national defense.

In this case, President Biden required organizations developing foundation models that pose serious risks to national security or public health to notify the government and report their safety test results.

The order also called on Congress to enact a data privacy law to govern AI use by federal agencies, and directed multiple federal bodies to develop guidelines for AI in specific sectors, such as:

  • Healthcare
  • Education
  • Public security
  • Energy
  • Commerce
  • Criminal justice
  • Defense

At the state level, regulation is also messy and complex, with each state doing (or not doing) its own thing.

According to the AI Index Report 2023, the top 3 states – by number of AI-related bills passed between 2016 and 2022 – are Maryland, California and Massachusetts, which makes sense considering they’re hubs for AI research and innovation.

However, most of these laws relate to specific contexts like online safety and employment. This inconsistency in regulation makes compliance difficult for businesses with operations in multiple states.

Why is U.S. AI regulation so complicated?

There are three key structural factors to consider.

First, the Constitution divides power between federal and state governments – which complicates any unified approach to AI regulation.

For example, the federal government manages matters including defense, foreign policy and interstate commerce, while states handle issues including education, public health and criminal justice.

But AI intersects with various areas under the jurisdiction of different authorities, creating a complex and fragmented landscape.

Second, the U.S. legislative process – which requires laws to be approved by both houses of Congress – makes it difficult to pass legislation, especially in the rapidly evolving field of AI. According to the AI Index Report 2023, only 10% of federal AI-related bills in the U.S. were passed into law in 2022.

Such a low pass rate helps explain why the government has preferred non-legislative approaches to governing AI, like President Biden’s recent executive order.

Third, the U.S. tech industry wields considerable influence over regulatory discussions on AI, due to its significant contribution to the nation’s GDP and AI development.

That’s why the government – keen on preserving its global AI leadership – closely monitors this sector's interests and opinions.

This also explains why the government has opted for voluntary frameworks that largely allow the tech industry to self-regulate and avoid any potential undermining of its position.

Predictions for AI Regulation in 2024

While self-regulation is easy on companies and can encourage good market practices around AI, it should only be a temporary solution.

Entrusting consumer safety to profit-driven commercial entities can leave consumers vulnerable to abuse and manipulation in the long run.

But given the structural challenges highlighted above, self-regulation seems to be the only feasible option for now.

And this approach will likely continue until some massive lawsuit, scandal, or tragedy involving AI shakes the nation and unites Congress behind AI-specific regulation.

At that stage, I predict the U.S. government will roll out a mix of licensing and consent regimes to ensure Americans are protected from severe AI harms and risks.

For example, the government might step in to require organizations to obtain a license before building powerful foundation models.

In fact, this idea has already been proposed in the U.S. AI Act blueprint unveiled by Senators Richard Blumenthal and Josh Hawley, though it’s too early to say whether this blueprint will pass into law in the current political environment.

Alternatively, the government could create new or update existing laws to give us as consumers more rights and control over how an AI system treats us and our data.

This could be done either via the Constitution (e.g., codifying the Blueprint for an AI Bill of Rights as a constitutional amendment) or via legislation.

One example is a bill proposed in 2023 called the NO FAKES Act, which seeks to give consumers the right to control their own image, voice, and visual likeness.

This would require deepfake creators to obtain a person’s consent before replicating their voice or likeness. But again, note that the bill does not regulate the underlying technology itself.

But for the most part, I expect the government to leave it to the regulators and agencies to tackle AI.

Does this 'messy' framework work in the U.S.?

From the FTC’s investigation into OpenAI for potential violations of consumer protection laws to the U.S. Copyright Office’s review of copyright issues around generative AI, regulators have already been quite active in the AI space – and their activity will likely increase as regulatory frameworks are strengthened.

That said, while there may be new laws and frameworks, the U.S. AI regulatory landscape will likely remain messy.

AI has so many applications across industries, which makes it difficult to have one clean set of rules.

For the U.S., perhaps ‘messy’ is the way to govern.

Editor’s Note: This article is not intended to provide legal advice or to be a comprehensive guide or reference.
