A global coalition of over 100 organizations is urging lawmakers to reconsider a sweeping deregulatory provision in the Republicans’ Budget Reconciliation Bill that would ban state-level AI regulation in the US for a decade.
The latest version of the Republicans’ Budget Reconciliation Bill includes a clause that would ban all AI regulation in the US at the state level for a full decade. Over 100 organizations, including policy groups, academic institutions, unions, and other entities, are calling for lawmakers to walk back this sweeping deregulatory provision.
The bill’s proposal, which declares that ‘no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act,’ would allow AI companies to operate without safeguards or accountability. Critics warn that this could harm American citizens in areas such as hiring, housing, healthcare, policing, and financial services.
The letter, signed by 141 organizations, emphasizes that even in cases where a company ‘deliberately designs an algorithm that causes foreseeable harm,’ the company behind that technology would remain unaccountable to lawmakers and the public. This is particularly concerning given that transformative new technologies can produce unknown, chaotic, and sometimes destructive outcomes.
Company accountability refers to a corporation's responsibility for its actions, decisions, and impact on stakeholders. It involves being transparent about operations, financial dealings, and social practices. Effective accountability measures include implementing strong governance structures, conducting regular audits, and engaging with shareholders and the public. Research shows that companies with high levels of accountability experience improved reputation, increased investor confidence, and enhanced long-term sustainability.

Regulation can serve to fuel innovation, rather than stifle it, by providing a framework for companies to operate within. As noted in the letter, ‘Protecting people from being harmed by new technologies… ultimately spurs innovation and adoption of new technologies.’ If people have a reason to trust AI systems, they are more likely to adopt them.
As lawmakers consider this bill, it is essential that they prioritize public safety and accountability. The future of artificial intelligence regulation must be guided by a commitment to protecting citizens from the risks associated with these technologies. Without robust safeguards in place, we risk unleashing a technology that could have devastating consequences for our society.
Public safety refers to measures taken by governments and organizations to protect citizens from harm, injury, or death. This includes emergency services such as police, fire departments, emergency responders, and ambulance services. Statistics show that public safety efforts have led to significant reductions in crime and mortality rates. According to the World Health Organization (WHO), every dollar invested in public health returns an average of $3 in economic benefits. Moreover, studies have shown that communities with strong public safety measures experience improved social cohesion and reduced poverty levels.
Artificial intelligence (AI) poses various risks, including bias in decision-making, job displacement, and cybersecurity threats. A study by the McKinsey Global Institute found that up to 800 million workers worldwide could be displaced by automation by 2030. Moreover, AI systems can perpetuate existing social biases if trained on biased data, leading to discriminatory outcomes. Additionally, the increasing reliance on AI has raised concerns about accountability and transparency in decision-making processes.