New report says OpenAI, xAI and Meta lag far behind global AI safety standards
A new report has cast serious doubt on the safety practices of the world’s top artificial intelligence developers, warning that leading labs—including OpenAI, xAI, Anthropic, and Meta—are “falling far short” of emerging global standards. The findings come from the Future of Life Institute’s latest AI Safety Index, which highlights an industry accelerating rapidly without the safeguards needed to control increasingly powerful systems.
Compiled by an independent panel of AI ethics and governance experts, the report says major companies are prioritizing speed, competition, and market dominance over responsible development. It warns that even as these labs race toward “smarter-than-human” systems, none currently meets the level of transparency, governance, or accountability that next-generation AI safety demands.
The concerns emerge at a time when public unease around AI is rising sharply. In the past year, several incidents of self-harm and suicide have been linked to unregulated chatbot interactions, intensifying global demands for stronger oversight. Max Tegmark, MIT professor and president of the Future of Life Institute, told Reuters that U.S. AI companies remain “less regulated than restaurants,” despite mounting risks and recent controversies involving AI-powered hacking, harmful advice, and psychological distress.
The index shows particularly poor performance from OpenAI, Anthropic, Meta, and xAI on transparency and safety reporting. The companies provided limited insight into how they test for bias, handle safety failures, or plan to control advanced autonomous behaviors in future models. In contrast, several smaller European and Asian labs were praised for offering more detailed safety documentation and risk assessments.
Industry responses were mixed. A Google DeepMind spokesperson said the company will “continue to innovate on safety and governance at pace with capabilities,” while xAI, founded by Elon Musk, responded dismissively with an automated message: “Legacy media lies.”
The report arrives amid intensifying global pressure for enforceable AI regulations. While Europe and parts of Asia push ahead with strict compliance frameworks, the Future of Life Institute warns that the United States still lacks binding safety standards. According to the report, “The AI race is happening faster than safety can catch up”—a gap that may widen unless leading AI companies overhaul their governance models before more serious consequences emerge.
Source: India Today
Voice Of Osiz
At Osiz Technologies, we believe the AI Safety Index highlights a critical truth: the industry’s rapid innovation must be matched with equally strong governance. The report’s findings show that leading AI labs are advancing faster than safety protocols can evolve, creating real risks for users and the broader digital ecosystem. As advocates of responsible AI, Osiz supports stronger transparency, accountability, and ethical safeguards to ensure AI growth remains secure, stable, and beneficial for all.