The latest wave of AI news reflects a landscape where innovation and controversy collide. Atlassian is highlighting the real-world, human-centered challenges of integrating AI into service management, even as broader conversations liken today's AI advances to a 'Black Mirror' episode: unsettling, disruptive, and fraught with ethical implications. Nowhere is this starker than in reports of Israel using US-made AI models in warfare, raising urgent debates about technology's role in life-and-death decisions. On the tech frontier, AI-generated optical illusions are being deployed to distinguish humans from bots, while generative AI continues to reshape journalism, with OpenAI both 'uncensoring' ChatGPT and forging a high-profile partnership with the Guardian Media Group.

As AI makes headlines for both progress and blunders, from 12 notorious AI disasters to the finding that AI mistakes differ fundamentally from human errors, recent trials show that humans still outperform AI in certain government tasks. Meanwhile, the research community is pushing for robust governance and transparency: Perplexity has launched Deep Research, and the UK has controversially rebranded its AI oversight body as the 'AI Security Institute', dropping the word 'safety'. Debates over energy use and economic impact rage on, with skepticism about a so-called 'AI energy crisis' and new indices, such as Anthropic's Economic Index, attempting to measure the sector's true economic influence.

Software engineering is being upended by the rapid evolution of large language models (LLMs), as the LLM Curve of Impact predicts seismic shifts in the profession, possibly heralding 'the end of programming as we know it'. AI's limitations are also on display: AI chatbots still struggle to summarize news accurately, BBC content is being repurposed in AI assistants, and concerns mount over copy/paste plagiarism and rampant hallucinations. As Europe launches a €200 billion alliance to seize global AI leadership and Elon Musk's group bids $97 billion to control OpenAI, the sector is abuzz with power plays and regulatory drama; notably, the US and UK have declined to sign recent international AI safety declarations.

Amid this frenzy, researchers are replicating OpenAI's hottest tools in mere hours, and new frameworks like Constitutional Classifiers aim to encode explicit ethical rules directly into AI systems. However, security lapses, such as DeepSeek's exposed, unauthenticated database, and failed safety guardrails illustrate persistent risks. Fei-Fei Li cautions that AI policy must be grounded in science rather than hype, and calls for Australia to seize AI's potential are met with mixed enthusiasm. As we wrestle with the practical, economic, and philosophical implications of AI, from healthcare to everyday 'copy/paste plagiarism robots', one thing is clear: the future belongs to those who can not only imagine but also act in a world where thought partnership with machines is increasingly the norm.