


This week, Anthropic revoked OpenAI's access to the Claude API, a move that rippled through the AI industry. According to sources who spoke to WIRED, the cutoff came after Anthropic concluded that OpenAI had violated its terms of service.
The decision has sparked debate about competition, fairness, and the ethics of using a rival's tools for research and development.


What brought about the cut-off?
Anthropic claims OpenAI crossed a line with Claude: at least one part of the system, Claude Code, was reportedly being used inside OpenAI to test and benchmark an upcoming model, GPT-5. These tests covered areas such as coding performance, creative writing, and safety-related prompts involving harmful or sensitive topics.
OpenAI reportedly ran these evaluations through developer API access rather than the standard chat interface, which allows far more systematic testing. Anthropic's terms of service, however, explicitly prohibit using its models to build a competing product, reverse engineer its systems, or duplicate its services.
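For context, API access means driving the model from code instead of a chat window, so large batches of benchmark prompts can be submitted and logged automatically. Below is a minimal sketch of what such a harness could look like using Anthropic's Python SDK; the model name, prompt set, and structure are illustrative assumptions, not details from the reporting.

```python
# Hypothetical benchmarking harness -- illustrative only. The model name,
# prompt set, and output handling are assumptions, not reported details.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A tiny stand-in for the kinds of categories reportedly tested.
BENCHMARK_PROMPTS = [
    "Write a Python function that merges two sorted lists.",  # coding
    "Write a six-line poem about a lighthouse keeper.",       # creative writing
]

def run_benchmark(prompts, model="claude-sonnet-4-20250514"):
    """Send each prompt through the Messages API and collect completions."""
    results = []
    for prompt in prompts:
        message = client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        # message.content is a list of content blocks; take the text block.
        results.append((prompt, message.content[0].text))
    return results

if __name__ == "__main__":
    for prompt, completion in run_benchmark(BENCHMARK_PROMPTS):
        print(f"--- {prompt}\n{completion[:200]}\n")
```

A loop like this scales to thousands of prompts and can be scored automatically, which is what makes API access far more useful for benchmarking than a chat UI.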
Anthropic interpreted OpenAI's use as a direct breach, since GPT-5 is expected to excel at coding, the core specialty of Claude Code.
OpenAI Responds:
In a statement, OpenAI said that evaluating competitor models is standard industry practice for benchmarking and safety testing.
Hannah Wong, OpenAI's chief communications officer, added that despite the fallout, OpenAI's own API remains available to Anthropic. While OpenAI maintains that its aim was responsible evaluation, Anthropic appears intent on keeping the restriction in place. Anthropic did, however, say it would continue to provide some API access for safety testing, a common practice among rivals.


A Pattern of Competitive Gatekeeping:
This is hardly new in the technology industry; the giants have long limited competitors' access to their platforms.
A classic example is Facebook cutting off Vine's access to its friend-finding API; more recently, Salesforce restricted third-party use of some Slack APIs.
Nor is this the first time Anthropic has taken such a step. Earlier this year, it cut off the coding-focused startup Windsurf amid rumors that OpenAI would acquire it. That deal ultimately fell through, but the episode points to a pattern of defensive moves in a fast-moving, competitive AI market.


Fair use and ethics are a growing concern:
The Claude API controversy highlights a growing tension in AI research: balancing open access with fair competition. Testing rival models is vital for safety and development, yet companies also want to protect their products and intellectual property.
Both Anthropic and OpenAI are influential players in the AI sector, and this confrontation may signal an industry turning more secretive and less collaborative.
Once GPT-5 is released, more friction between AI firms seems likely, as both companies want to lead on performance, safety, and market share. Meanwhile, the line between legitimate analysis and unhealthy exploitation keeps getting thinner.
For now, OpenAI will have to proceed without unrestricted access to Claude, and Anthropic has made clear it will not tolerate what it sees as abuse. The AI arms race continues, but the limits are being pushed not only by the machines but by the people building them.