
OpenAI and Anthropic have released their latest coding assistant models, Codex 5.3 and Opus 4.6, showcasing advances in usability and performance. The article discusses what these releases mean for the evolving landscape of AI agents.

A new benchmark called 'Halluhard' shows that even advanced AI models like Claude Opus 4.5 still hallucinate in nearly a third of cases, highlighting ongoing challenges in AI reliability.

Since March, more than 160 companies in New York have filed notices of mass layoffs, yet none attributed the cuts to technological innovation or automation, despite many having adopted AI tools. The pattern raises questions about companies' reluctance to acknowledge AI's role in workforce reductions, given the potential reputational risks. Governor Kathy Hochul has directed the Department of Labor to investigate whether AI is a factor in layoffs, making New York the first state to include this option in WARN filings. The absence of AI-related citations in layoff notices suggests companies may be avoiding the topic, even as significant layoffs occur at major firms like Goldman Sachs and Amazon. The article discusses the implications of this data collection initiative, the need for transparency in AI deployment, and proposed legislation to enhance reporting on AI-related job losses.

The EU has warned Meta over potential antitrust violations for blocking rival AI chatbots from WhatsApp, claiming the company abuses its dominant market position. The warning could lead to significant regulatory action against the tech giant.
As nuclear treaties dissolve, researchers are proposing a new approach to arms control that uses AI and satellite technology to monitor nuclear weapons. Matt Korda of the Federation of American Scientists outlines a proposal for 'cooperative technical means' that would replace on-site inspections with remote monitoring, using existing satellite infrastructure to observe missile silos and production sites and relying on AI for data analysis. The proposal faces hurdles, however, including the need for cooperation among nuclear powers and the difficulty of building effective AI systems with limited training data. Experts warn that while the approach may be better than nothing, it raises concerns about the reliability and transparency of AI in such critical applications.