Navigating the Future of AI

Without a federal framework, artificial intelligence faces a patchwork of state laws that could hinder innovation and weaken U.S. competitiveness.

By Ethan Gibble
Contributing Writer

As artificial intelligence (AI) continues to evolve rapidly, it is becoming ever more deeply embedded in business operations and customer experiences. To keep pace with that adoption, legislative bodies around the country are moving quickly to regulate AI, with an emphasis on consumer protection, labor law and copyright concerns. As nations throughout the world move into uncharted territory, industry leaders are warning that the reactionary legislative frenzy, which has already seen more than 1,000 AI bills introduced in the United States, may do more harm than good.

The U.S. Chamber of Commerce has been among the loudest voices urging caution, calling for a federal moratorium on state AI regulation. A 50-state patchwork of inconsistent laws, the Chamber contends, would not only hamper innovation and force compliance burdens on small businesses but would also put the United States at a competitive disadvantage globally.

“The latest tracker I’ve seen showed that over 1,070 bills have been introduced on the state side,” said Michael Richards, executive director of policy at the U.S. Chamber of Commerce Technology Engagement Center. “All these bills have different definitions and different scopes, which makes it very, very hard for the mom-and-pop shops who don’t have compliance departments to understand what the rules and regulations are going to be, state to state.”

Importance of a Moratorium

Back in June, the House Energy and Commerce Committee proposed a 10-year moratorium on state AI enforcement in its budget reconciliation bill. While the Chamber endorsed the pause, it also supported an amendment from Sens. Ted Cruz (R-TX) and Marsha Blackburn (R-TN) that would have cut the moratorium down to five years. In the end, all versions of the moratorium language were removed during the vote-a-rama phase of the legislation that became the One Big Beautiful Bill Act.

It was not the outcome the U.S. Chamber and other business advocacy groups had hoped for. “A 50-state patchwork is just going to be very unhelpful,” Richards said. “It’s going to burden businesses with compliance requirements and limit the ability to compete. We’ve done a lot of work on the Chamber side in continuing to call for a federal framework. The most important thing to us is giving the federal policymakers the time necessary to develop that framework.”

With the moratorium language removed, states began moving ahead with their own approaches, setting up the precise fragmented regulatory landscape that Richards warns against.

Forging Ahead with Fragmentation

Without a federal moratorium in place, the Chamber continues to advocate for risk-based policies that protect innovation and small businesses. New Mexico House Bill 60 and Nebraska Legislative Bill 642, for example, aimed to ensure that AI usage is ethical and responsible. The Chamber commended that goal but raised concerns about how the legislation conflicted with existing state laws. The New Mexico legislation ultimately did not pass, and the Nebraska legislation has stalled in committee.

Although those bills have failed to gain traction, local and state laws continue to be passed and will serve as informative case studies on the benefits and detriments of fragmented regulation. New York City’s Local Law 144, which went into effect in 2023, regulates the use of automated employment decision tools. In June, the California Civil Rights Council secured approval for regulations that protect against employment discrimination related to artificial intelligence and automated hiring systems. These regulations, set to go into effect on Oct. 1, 2025, are intended to prevent situations such as a hiring algorithm that rejects women applicants because it is trained to screen for job seekers who mimic the existing features of a company’s male-dominated workforce. Although unintended, such a situation could reinforce existing biases and contribute to discriminatory outcomes, according to the California Civil Rights Council, making employers liable for violating the state’s civil rights laws.

With the passage of Senate Bill 205 in 2024, Colorado became one of the first states to enact full-scale, statewide AI regulations. The law requires developers of high-risk artificial intelligence systems, such as those used to make employment or credit decisions, to use reasonable care to protect consumers from any known or reasonably foreseeable risks of discrimination stemming from algorithms. “Colorado’s bill does not go into effect until Feb. 1, 2026, so there’s still a roll-up time for that,” Richards said. “But I think it’s notable that one of the biggest state policymakers asking for a moratorium was Jared Polis, who is the governor of Colorado.”

In a statement published on the day he signed the bill, Polis expressed reservations about how it could stymie AI development in Colorado. “I am concerned about the impact this law may have on an industry fueling critical technological advancements across our state for consumers and enterprises alike,” Polis said. “Government regulation applied at the state level across the country could stifle innovation and deter competition in an open market.” Despite his misgivings, Polis explained that the guardrails and lengthy timeline for implementation included in the bill were enough to earn his signature. Still, he hoped the passage of Senate Bill 205 would highlight the importance of having a conversation about AI regulations at the national level.

Current State of Federal Law

When it comes to AI oversight, Richards emphasized the Chamber is not opposed to all regulation. Rather, it is in favor of smart governance guided by risk analysis, particularly given that many existing laws already address AI-related issues. “We understand there are already rules and regulations on the books when it comes to artificial intelligence,” Richards explained. “Even people like Lina Khan (former chair of the Federal Trade Commission) have indicated that AI was not developing in a legal vacuum. We have been calling for enforcement of current laws but obviously where gaps exist, there need to be rules and regulations put in place that are risk-based.”

A good example of targeted, sensible legislation is the TAKE IT DOWN Act, which prohibits deepfake and exploitative AI-generated images. That bill was signed into law this past May with the full support of business advocacy groups like the U.S. Chamber. Richards notes that the organization is closely following bipartisan work to see what kind of consensus on similar measures can emerge federally.

The divisions between Democrats and Republicans may be deeply entrenched on issues like tax reform, but there does appear to be a more collaborative approach so far on AI. Last year, the House organized an AI task force that was co-chaired by Reps. Jay Obernolte (R-CA) and Ted Lieu (D-CA). Meanwhile, the Senate set up the Senate AI Insight Forum, a series of invite-only meetings led by Sens. Chuck Schumer (D-NY), Todd Young (R-IN), Martin Heinrich (D-NM), and Mike Rounds (R-SD) that sought input from tech industry leaders, academics and union representatives.

“They came out with what they called an AI roadmap with a bunch of policy recommendations,” Richards said. “What we’re hoping for is the House and Senate being able to work together to move the ball forward. We’re looking forward to working with them to see these recommendations put into policy.”

As Congress develops its own recommendations, the White House unveiled America’s AI Action Plan in July. Intended as a roadmap for American leadership in the emerging artificial intelligence field, the document outlines broad policies that the federal government and federal agencies should pursue in three overarching areas: innovation, infrastructure, and international diplomacy and security.

The plan received early praise from business advocacy groups, including the National Association of Wholesaler-Distributors (NAW) and the U.S. Chamber. In particular, NAW commended the inclusion of several of its recommendations to the Trump administration, including:

  • Setting up a federal framework to promote long-term AI innovation and infrastructure development through engagement with industry stakeholders, such as wholesaler-distributors
  • Leveraging existing federal laws and funding programs to reduce the regulatory complexity created by inconsistent state AI rules
  • Clarifying potential legal barriers to adoption, including a review of past Federal Trade Commission (FTC) investigations
  • Developing workforce strategies that prioritize AI skills development and identify high-priority occupations for AI readiness
  • Updating tax guidance to confirm that AI training programs may qualify as eligible educational assistance under Section 132 of the Internal Revenue Code

“NAW looks forward to continuing to work with the administration to ensure the outcomes from the action plan support further AI deployment and adoption across the wholesale distribution industry,” the association said in a statement.

Pillars of Federal Regulation

Each of these proposals represents a step toward what business advocacy groups hope will become a comprehensive federal AI regulatory framework that pre-empts state laws and allows U.S. companies to operate under a consistent set of national rules. But with so many different ideas for federal regulation, what are the guiding principles that can successfully drive common sense legislation across the finish line?

The Chamber itself had established an AI Commission tasked with providing independent, bipartisan recommendations to assist policymakers. Over the course of a year, the commission met with more than 87 expert witnesses across the country and overseas, while also gathering feedback from stakeholders who answered three separate requests for information. In doing so, it identified five principles that should be at the core of AI regulation:

  1. Efficiency: Existing laws and regulations must be considered, with a focus on filling in gaps to address new challenges.
  2. Collegiality: Federal interagency collaboration is essential, given AI’s complex and rapidly evolving nature. A coordinated strategy enables agencies to leverage their expertise and address the most pressing issues within their domains.
  3. Neutrality: Laws must be tech neutral, focusing on applications and outcomes rather than the technology itself. New laws should address gaps, protect rights and build public trust while favoring industry-specific guidance over a one-size-fits-all approach.
  4. Flexibility: Laws should support risk assessment and innovation led by the private sector. Non-binding approaches like soft law and best practices developed by experts, civil society, and government will provide the flexibility needed to keep pace with rapid technological change.
  5. Proportionality: Policymakers should address legal gaps with a risk-based approach, ensuring balanced and proportionate regulation.

Staying Ahead of Global Competition

As legislators consider how to approach AI domestically, they must also consider the competitive threats American businesses are facing from other countries that are investing heavily in the technology. Disjointed regulation at home, Richards argued, will not only hinder U.S. innovation but also its position as a global leader. “We are in a strategic race with China when it comes to artificial intelligence,” he warned. “That 50-state patchwork is going to be a burden on businesses as they’re trying to compete. Not having that federal framework puts us at a disadvantage internationally as well.”

A Wall Street Journal article from July 2025 noted that China is quickly eroding America’s lead in the global AI race. While not able to rival the United States’ advantages in semiconductors, research and access to financial capital, Chinese companies such as DeepSeek are already gaining traction by offering comparable performance at much lower prices — the same strategy that has been effective at making inroads into many other industries.

Increased Threat to Intellectual Property and Sensitive Data

Business concerns with AI go well beyond regulatory considerations. The more widely AI is used, the more frequently AI companies wind up as defendants in litigation. Whether it be authors, publishers, artists, or music labels, there are dozens of lawsuits winding their way through the courts right now as individuals and businesses aim to protect their copyrighted material from being compromised by AI or used as training data. One of the most prominent lawsuits was filed in June, when Disney and Universal sued Midjourney, alleging that the AI image generation startup was a “bottomless pit of plagiarism” due to its ability to mimic artistic styles and replicate well-known characters.

While AI legal proceedings of this magnitude may not be the norm, they speak to the changing landscape that companies of all sizes must navigate when using AI to improve their operating efficiency and customer experience. The advantages created by AI come with potential risks, such as proprietary content, trade secrets and customer data being ingested and reproduced without licensing.

The Road Ahead

AI is evolving exponentially, often more quickly than laws and regulatory agencies can react. “Advancements in artificial intelligence are coming rapidly, with new model releases arriving quarterly at this point,” Richards observed. “You may pass a law that will quickly become out of date based on where the technology is headed.”

From the perspective of business advocacy groups like the U.S. Chamber, lawmakers must resist policies that would choke advancement through fragmented AI overregulation. Instead, legislators should commit to a unified, risk-based federal framework. Such an approach would provide regulatory clarity while better balancing the need to implement safeguards with giving businesses and AI developers the room to innovate. Competitors across the entire world are working to push AI technology forward and implement it in new and revolutionary ways. For American businesses, the stakes could not be higher: either policymakers craft thoughtful, adaptable rules that encourage responsible growth, or they watch opportunity migrate to the countries that do.