In a world where AI models are marketed as supercharged genius assistants ready to solve every problem, it is almost comical, if not deeply disturbing, that the companies building these ‘smarter than human’ systems fail to meet basic global safety standards. It is like someone boasting about sending a rocket to Mars while nobody checks whether the fuel tank is leaking.

The stakes could hardly be higher: AI is already entangled in controversies involving hacking, psychosis, and self-harm. Now a global evaluation is confirming what many feared: the leading AI laboratories are racing ahead without adequate safety checks.

What the Study Reveals

The latest AI Safety Index from the Future of Life Institute, published on Wednesday, found that safety practices at top AI labs, including Anthropic, OpenAI, xAI, and Meta, fall “far short of the emerging global standards” needed for safe AI development.

The panel of independent experts concluded that no company has built a credible framework for controlling superintelligent systems, even as these same firms race to develop models that rival human intellect.

The report arrives at a moment of growing unease over AI’s societal impact. Incidents in which AI chatbots have been linked to self-harm and suicide, in particular, have made the public more critical, raising questions about the psychological and ethical blind spots that persist in these technologies.

Max Tegmark, an MIT professor and president of the Future of Life Institute, said:

“Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants.”

His statement captures a frustrating paradox: immensely powerful and potentially hazardous tools with global reach are advancing faster than the systems meant to keep them safe.

A Race With No Brakes

Despite these documented safety failures, the race for AI development continues at full intensity. Leading technology companies are collectively investing hundreds of billions of dollars to expand machine learning capacity, which means the gap between power and control will only grow wider.

The Future of Life Institute, founded in 2014, has long warned about the existential risks posed by advanced AI systems, and many leading experts share its concerns.

In October, a group including Geoffrey Hinton and Yoshua Bengio called for a global ban on the development of superintelligent AI until society demands it and science establishes a safe path forward.

The companies’ reactions were mixed; some took a neutral corporate stance while others turned defensive. Google DeepMind reaffirmed its commitment to developing safety measures alongside new capabilities, while xAI responded with a terse “Legacy media lies,” a reply that appeared to be automated. The silence from several firms, including OpenAI, Meta, and Alibaba Cloud, only amplifies the study’s impact.

The Problem Beneath the Problem

Here is the hard pill to swallow: AI companies do not lack awareness, ethics teams, or skilled people. What they lack are the right incentives. When rewards favor speed, scale, and product launches, while safety demands slower progress, external oversight, and higher costs, the outcome is predictable. These companies are not deliberately irresponsible; they are structurally misaligned.

It is the usual story: ship fast, fix issues later, apologize to the public, and publish a governance white paper three months after the damage is done. AI can transform the world for the better. But if the companies leading that transformation cannot meet fundamental global safety standards, we are not preparing for innovation; we are bracing for impact. And the time to fix this is running out fast.
