
"This week I witnessed one of the sharpest contrasts in recent AI history: the sudden shutdown of Sora and the silent explosion of autonomous background tools."
This week I witnessed one of the sharpest and most revealing contrasts in the recent history of artificial intelligence. On one hand, the sudden shutdown of a product that had enchanted the world with its visual magic. On the other, the silent and unstoppable explosion of tools working in the background, grinding through code and tedious tasks on our desktops.
If there is a common thread in the notes I took over the last seven days, it is this: playtime is over. Passive conversational interfaces are giving way to true operational agents, capable of acting, clicking, and solving problems in total autonomy.
I have always evaluated technologies based on their real impact on time-to-market and daily efficiency. What is happening now in the labs of Anthropic, Google, and even Xiaomi confirms that the real gold rush is no longer generating the perfect video, but building the infrastructure to delegate the dirty work.
The most unexpected move of the week came from OpenAI, which decided to definitively pull the plug on Sora just six months after its launch. An app that dominated the App Store charts has been abruptly shut down, and with it ends the newly signed three-year partnership with Disney.
I find this reversal an unmistakable signal. AI-generated videos are wonderful to watch on social media, yet they remain a nightmare to integrate into automated and measurable corporate workflows. I have always avoided including video generation in the production systems I design. The lack of precise control over the output and the constant risk of visual hallucinations make these tools completely unsuitable for rigorous operational processes.
The abandonment by Disney confirms a dynamic I have been observing for some time: enterprise companies run away at top speed when faced with the legal uncertainty related to copyright and unsustainable inference costs. OpenAI is clearly recalibrating its priorities, shifting the focus towards B2B automation. It is the fall of chaotic agents and the dawn of deterministic infrastructure, a mandatory step for anyone wanting to build scalable and defensible solutions on the market.
While synthetic videos collapse, data on American credit card transactions show that subscriptions to the Pro version of Claude have more than doubled. People willingly shell out 20 dollars a month, and the reason is simple: they find an actual increase in productivity in their daily workflow.
The real push comes from the "Computer Use" feature. Consumers are paying to have an agent available that can navigate, click, and act autonomously on the operating system. I myself have integrated Claude Code into my development ecosystem for coding logic in React and Next.js, and the results on release times are undeniable. We are witnessing the definitive transition from passive chatbots to operational agents.
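The value of "Computer Use" style features lies in a simple control flow: observe the screen, ask the model for the next action, execute it, repeat. The sketch below is my own schematic of that loop, not Anthropic's SDK; every name in it (`Action`, `decide`, `agent_loop`) is hypothetical, and the `decide` function stands in for what would really be an LLM call.

```python
# Schematic observe-decide-act loop behind "computer use" style agents.
# All names are hypothetical; real SDKs differ, the control flow is the point.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    payload: str = ""

def decide(observation: str) -> Action:
    # Stand-in for the model call: a real agent would send a screenshot
    # to the LLM and parse the returned tool call into an Action.
    if "login" in observation:
        return Action("type", "user@example.com")
    return Action("done")

def agent_loop(observe, execute, max_steps: int = 10) -> int:
    """Run the loop until the model says it is done; return steps taken."""
    steps = 0
    for _ in range(max_steps):
        action = decide(observe())
        steps += 1
        if action.kind == "done":
            break
        execute(action)
    return steps
```

The `max_steps` cap matters in practice: an autonomous agent with no step budget is an agent that can loop forever on a stuck UI.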
As if that were not enough, a serious configuration error in Anthropic's internal CMS exposed confidential documents about their next flagship model, codenamed "Mythos". The drafts describe a system with drastically higher scores in software coding and cybersecurity, capable of exploiting vulnerabilities at unprecedented speeds.
Reading the details about the model's prohibitive inference costs, my first thought is budget discipline. API consumption will need to be calibrated with extreme precision, reserving "Mythos" exclusively for the most complex logical tasks. The idea of delegating entire architectural refactorings to an autonomous intelligence fascinates me, but it will require ruthless control over computing budgets.
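In practice, that discipline means a router that sends only high-complexity tasks to the flagship and everything else to a cheaper workhorse, with a hard spending cap. A minimal sketch, assuming made-up model names ("mythos-large", "workhorse-small") and illustrative per-token prices:

```python
# Hypothetical model router: expensive flagship only above a complexity
# threshold, with a hard monthly budget. All names and prices are invented.
PRICING_PER_1K_TOKENS = {"workhorse-small": 0.002, "mythos-large": 0.06}

class BudgetedRouter:
    def __init__(self, monthly_budget_usd: float, complexity_threshold: int = 7):
        self.budget = monthly_budget_usd
        self.spent = 0.0
        self.threshold = complexity_threshold

    def pick_model(self, complexity: int) -> str:
        # Complexity is a 1-10 score estimated upstream, e.g. by a cheap classifier.
        return "mythos-large" if complexity >= self.threshold else "workhorse-small"

    def charge(self, model: str, tokens: int) -> float:
        cost = PRICING_PER_1K_TOKENS[model] * tokens / 1000
        if self.spent + cost > self.budget:
            raise RuntimeError("monthly inference budget exhausted")
        self.spent += cost
        return cost
```

The design choice worth stealing is the fail-closed budget check: when the cap is hit, the router refuses the call instead of silently overspending.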
To run these agents on a large scale, the hardware must evolve brutally. This week Google presented "TurboQuant", a memory compression algorithm that promises to reduce the RAM footprint for inference by up to six times. Wall Street's reaction was hysterical, with immediate stock crashes for giants like Micron and Western Digital.
I analyzed the paper and the implications are massive. Reducing the required memory means being able to run huge models on edge hardware or brutally halving the costs of cloud instances. I am waiting to test the first open source ports to evaluate if the output quality withstands the impact of extreme compression, but the direction is set.
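The back-of-the-envelope math makes the stakes concrete. Taking a 70B-parameter model at fp16 (2 bytes per weight) as a baseline and applying the claimed 6x reduction, the numbers are illustrative, not benchmarks of "TurboQuant" itself:

```python
# Memory footprint of model weights alone, ignoring KV cache and activations.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

fp16 = weight_memory_gb(70, 2.0)   # roughly 130 GB of weights at fp16
compressed = fp16 / 6              # the claimed 6x reduction: ~22 GB
print(f"fp16: {fp16:.1f} GB, compressed: {compressed:.1f} GB")
```

A model that needed a multi-GPU server suddenly fits in the memory budget of a single high-end consumer card, which is exactly why memory vendors' stock reacted.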
On the security front, Nvidia launched OpenShell, a framework to confine the execution of agent code in ephemeral sandboxes. Until yesterday, I had to invent makeshift security layers to prevent an LLM agent from executing destructive queries in production. Now I have a native solution available to let artificial intelligences interact with databases and file systems without compromising data integrity. It is the missing piece to bring agentic AI into banking CRMs and legacy systems.
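The makeshift layers I used to build looked roughly like this: run model-generated SQL against a connection that is read-only at the driver level, so a destructive statement fails instead of touching data. This is my own sketch in Python with SQLite's URI read-only mode, not the Nvidia OpenShell API:

```python
# Minimal guard for agent-generated SQL: a read-only connection makes
# any write attempt raise instead of mutating the database.
import sqlite3

def run_agent_query(db_path: str, sql: str):
    # mode=ro opens the database read-only at the SQLite driver level
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

With this in place, a hallucinated `DELETE FROM users` raises `sqlite3.OperationalError` instead of wiping a table. A proper sandbox goes much further (ephemeral filesystems, network isolation), but enforcing read-only access at the connection layer is the cheapest first line of defense.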
Meanwhile, Xiaomi shuffled the deck by releasing the MiMo models. The Chinese company is building an operational control layer for pure hardware, allowing agents to act directly on the operating system of phones, cars, and home automation, completely bypassing external APIs. AI steps out of the browser and takes control of the terminal in an increasingly pervasive and integrated way.
It is not just about code. Google released Gemini 3.1 Flash Live via the Live API, aiming straight at ultra-low latency voice interactions. I have always thought that response delay was the real hurdle for the massive adoption of voice agents in customer service.
With this release, the barrier drops drastically. The ability to prototype reactive multimodal agents in just a few hours opens huge scenarios for the automation of switchboards and customer support. I will test it shortly in my workflows to understand how it behaves under stress.
The market is finally punishing visual "toys" to reward the silent automation that cuts operational costs.
Beyond the main news, I always sift through the feeds to catch the underground movements of the industry. This week there have been several tremors that outline the coming months of development.
| News | My point of view |
|---|---|
| Escape from xAI | All 11 co-founders have left Elon Musk's company. A very strong signal of instability in a startup that aimed to compete with the giants. |
| Microsoft takes the OpenAI datacenter | The mega Texan facility abandoned by OpenAI passes to Redmond. Physical infrastructure remains the real bottleneck of the sector. |
| Apple distills Gemini | Cupertino is shrinking Google's models to run them on-device. Edge computing becomes the priority for privacy. |
| Karpathy on bottlenecks | The former director of Tesla AI stated that humans are the real limit in research. AI designing AI is getting closer. |
Theory is useless if it does not translate into executable code. Here are the tools that emerged these days that I have already started incorporating into my repositories. If you want to dive deeper into my usual stack, I refer you to the complete list of my AI tools.
The era of endless prompts to get formatted text is ending. We are entering the phase where we define the goal, provide the credentials, and let the model find the way to execute the task. And frankly, I could not wait.


AI Solutions Architect
As an AI Solutions Architect I design digital ecosystems and autonomous workflows. After almost 10 years in digital marketing, today I integrate AI into business processes: from Next.js and RAG systems to GEO strategies and dedicated training. I like to talk about AI and automation, but that's not all: I've also written a book, "Work Better with AI", a practical handbook with 12 chapters and over 200 ready-to-use prompts for those who want to use ChatGPT and AI without programming. My superpower? Looking at a manual process and already seeing the automated architecture that will replace it.