The controversy over AI safety has intensified following Anthropic's decision to invest $20 million in AI governance advocacy ahead of the 2026 midterm elections, sparking a sharp political confrontation with technology companies and advocacy groups that favor deregulation. Anthropic announced the donation to Public First Action, a group that supports AI safeguards.
"The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests," Anthropic said in its statement, framing its political spending as aligned with responsible governance of emerging technologies.
The Major Donation
On 12 February 2026, Anthropic publicly announced a large financial commitment to Public First Action, a group led by former U.S. Representatives Brad Carson and Chris Stewart.
The first funds went to six-figure advertising campaigns supporting two Republican candidates: Marsha Blackburn, who is contesting the Tennessee governorship on a child online safety platform, and Pete Ricketts, for his advocacy of a ban on exports of advanced semiconductors from the United States to China.
Public First Action will support 30-50 candidates across both sides of the partisan divide and has a total fundraising goal of $50-$75 million.
This is significantly less than the resources of the competing pro-AI political action committee, Leading the Future, which boasts a war chest of $125 million, with contributions from OpenAI's Greg Brockman, Andreessen Horowitz, and Perplexity.
Electoral Implications and Prognosis
With its greater financial resources, Leading the Future is positioned to exert stronger lobbying influence. Nevertheless, the 2026 elections could still produce a coalition of lawmakers supportive of both AI innovation and standardized safety regulations.
Such an outcome becomes more likely if public opinion continues to favor caution, leaving industry executives little choice but to weigh safety implications more carefully in how they deploy their systems.
Anthropic's move therefore signals a maturing industry in which democratic accountability can serve as a mechanism to prevent the unchecked or destabilizing expansion of advanced AI capabilities.
