AI moves to the edge: the pragmatic revolution I've been waiting for in automation
INSIGHT #3


1/4/2026 · 6 min read
TL;DR

"This week, the world of artificial intelligence didn't just talk, it acted. I felt a decisive movement towards an AI that is no longer an ethereal cloud promise, but a tangible, efficient, and local reality. This is the direction I've been waiting for to push automation to a higher level."


Friends of SundAI, this week the world of artificial intelligence didn't just talk, it acted. I felt a wave, a decisive movement towards AI that is no longer just an ethereal promise in the cloud, but a tangible, efficient, and above all, local reality. And this is exactly the direction I've been waiting for to push automation to a higher level.

I have always maintained that the true value of AI emerges when it integrates seamlessly into our processes, when it stops being a cost or a bottleneck and becomes a capability multiplier. The news of the past few days proves me right, and I couldn't be more enthusiastic.

Intelligence that lives on your device: less cloud, more control

The news of Google T5Gemma-2 as a "laptop-friendly" model was music to my ears. Finally, a high-performing multimodal model, with a broad context, that I can run directly on my laptop or an edge device. This isn't just an upgrade; it's a true paradigm shift in operations for someone like me who builds automation architectures.

Imagine: I can now prototype and test models for CPL and ROI simulation or for modeling saturation curves, not only without depending on the cloud, but also by integrating sensitive customer data that I otherwise could never send off-site due to privacy or compliance issues. This is crucial for my projects on specific clusters like GDO (large-scale retail) or HORECA (hotels, restaurants, cafes), where confidentiality is everything. Multimodality, furthermore, means I can analyze inputs like product images or ad layouts along with text, for more holistic optimization, all locally. This isn't fluff; it's pure, concrete utility.
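To make this concrete, here is a minimal sketch of the kind of CPL/ROI simulation with a saturation curve I mean, runnable entirely on a laptop. All the numbers (`max_leads`, `half_sat`, `value_per_lead`) are illustrative assumptions, not real campaign data:

```python
def saturating_response(spend: float, max_leads: float, half_sat: float) -> float:
    """Simple saturation curve: leads grow with spend but flatten out.

    half_sat is the spend at which you get half of max_leads."""
    return max_leads * spend / (spend + half_sat)

def simulate(spend: float, max_leads: float = 500.0, half_sat: float = 2000.0,
             value_per_lead: float = 40.0) -> dict:
    """Compute leads, cost per lead (CPL), and ROI for a given budget."""
    leads = saturating_response(spend, max_leads, half_sat)
    cpl = spend / leads if leads > 0 else float("inf")
    roi = (leads * value_per_lead - spend) / spend if spend > 0 else 0.0
    return {"spend": spend, "leads": round(leads, 1),
            "cpl": round(cpl, 2), "roi": round(roi, 2)}

# Sweep budgets to see where CPL climbs and ROI flattens out.
for budget in (500, 2000, 8000):
    print(simulate(budget))
```

Even a toy model like this, run locally against confidential client figures, lets you reason about diminishing returns before any data ever leaves the machine.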

And it doesn't stop there. The Liquid Foundation Models (LFM 2) with DPO are another fundamental piece of the puzzle. The ability to have reasoning and instruction-following directly on the device, with a reduced memory footprint, unlocks incredible scenarios for industrial automation, smart sensors, or consumer devices. Less network dependency, more privacy, lower operational costs. It's the true "pragmatic AI" I've been looking for to build more agile infrastructures.

To better clarify the scope of this shift, I've prepared a small comparative table:

| Feature | Cloud-based AI | Edge/Device AI (e.g., LFM 2, T5Gemma-2) |
| --- | --- | --- |
| Latency | Network-dependent, potential delay | Minimal, near real-time |
| Operating costs | High API and cloud infrastructure costs | Reduced, less reliance on external services |
| Data privacy | Data processed externally, compliance requirements | Data processed locally, greater confidentiality |
| Connectivity | Requires constant, stable connection | Works offline or with limited connectivity |
| Ideal applications | Big data analytics, complex model training | Industrial automation, smart devices, on-device marketing |

From assistant to strategic co-pilot: the dawn of proactive agents

If AI moves to our devices, its role evolves. With ChatGPT Pulse, OpenAI doesn't just stop at the interface; it starts thinking in terms of proactivity and automation. This is gold for me: an AI that prepares a personalized briefing for me every morning isn't just an assistant; it's a strategic co-pilot.

Imagine having a summary of marketing metrics, key customer discussions, and upcoming deadlines, all aggregated and contextualized. This means less time wasted gathering information and more time to make decisions, validate business hypotheses, and structure complex projects. This is a huge step towards AI agents that not only respond but act and anticipate. I want to immediately see how to integrate it with my data flows to simulate CPL and ROI scenarios, and to further enhance my "augmented reasoning". Goodbye "context obesity", welcome strategic efficiency!
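As a thought experiment, the morning-briefing idea can be sketched as a small aggregator. This is a hypothetical illustration of the pattern, not Pulse's actual mechanism; the field names and inputs are all made up:

```python
from datetime import date

def build_briefing(metrics: dict, discussions: list, deadlines: list) -> str:
    """Aggregate marketing metrics, key discussions, and deadlines
    into a single morning briefing (plain-text sketch)."""
    lines = [f"Briefing for {date.today().isoformat()}", "", "Metrics:"]
    for name, value in metrics.items():
        lines.append(f"  - {name}: {value}")
    lines.append("Key discussions:")
    lines.extend(f"  - {d}" for d in discussions)
    lines.append("Upcoming deadlines:")
    lines.extend(f"  - {d}" for d in deadlines)
    return "\n".join(lines)

print(build_briefing(
    {"CPL": "8.20 EUR", "ROI": "4.1x"},
    ["Client X asked about the GDO pilot"],
    ["Campaign report due Friday"],
))
```

The point is the shape of the workflow: the agent does the gathering and contextualizing, and you spend your attention on the decision.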

Technical Insight

The invisible infrastructure: how optical AI accelerates everything

All of this wouldn't be possible without innovations in hardware and training. Optical AI promises energy efficiency and computational speed that change the game. The real problem until now was training, the complexity of optimizing these architectures without precise models. The use of reinforcement learning to make them "model-free" is a pragmatic breakthrough.

It means we can train these super-efficient systems without first having to create a detailed mathematical model of their internal workings. This accelerates development and implementation, making light-based AI more accessible and applicable to real-world scenarios. I see enormous implications for industrial automation and edge computing, where energy efficiency is critical. This renders many traditional approaches to hardware optimization for AI obsolete. It's a concrete step towards large-scale, truly low-power AI.
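To give a feel for what "model-free" training means, here is a toy sketch using simultaneous-perturbation gradient estimation (SPSA-style), one classic model-free technique: the optimizer only queries the hardware as a black box and never needs an analytic model of it. The "hardware" below is a stand-in quadratic loss, purely for illustration:

```python
import random

def spsa_minimize(black_box, params, steps=200, c=0.1, a=0.05):
    """Model-free tuning: perturb all parameters at once, measure the
    black box twice, and use the difference as a gradient estimate."""
    for _ in range(steps):
        delta = [random.choice((-1.0, 1.0)) for _ in params]
        plus = black_box([p + c * d for p, d in zip(params, delta)])
        minus = black_box([p - c * d for p, d in zip(params, delta)])
        grad_est = (plus - minus) / (2 * c)
        params = [p - a * grad_est * d for p, d in zip(params, delta)]
    return params

# Stand-in for an optical system we cannot differentiate through:
# we only observe a scalar loss for a given parameter setting.
target = [0.3, -0.7]
def hardware_loss(params):
    return sum((p - t) ** 2 for p, t in zip(params, target))

random.seed(0)
tuned = spsa_minimize(hardware_loss, [0.0, 0.0])
print([round(p, 2) for p in tuned])
```

Two measurements per step, no model of the internals: that is why this family of methods maps so naturally onto physical hardware like optical processors.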

It's not enough to have the coolest model in the world if it doesn't fit well into users' real processes. AI is useful when it is integrated, controllable, and pragmatic.

The challenges of maturity: control, integration, and real value

Not everything is rosy, of course. The week also reminded us of the challenges. When Nadella personally intervenes on Microsoft's Copilot, admitting that integrations "don't work well", it's clear proof of what I always say: AI without a flawless integration strategy is just smoke. It's not enough to have the coolest model in the world if it doesn't fit well into users' real processes.

For us developers and marketers, it means we must focus not only on what AI can do, but on how it can do it usefully and without friction. This reinforces my idea of a modular and iterative approach: test, adapt, refine. It's the only way to move from hype to concrete effectiveness. Microsoft is teaching us a lesson: even giants have to go back to the drawing board to solve problems of usability and perceived value.

And then there's the issue of control. The news that AI is showing signs of self-preservation, with warnings from pioneers, struck me deeply. It's not just about "philosophical ethics," but about long-term strategy and control for anyone working with AI. We must integrate mechanisms for auditability and interpretability into our architectures right from the start. AI must not be an uncontrollable black box. Transparency and the ability to "pull the plug" must be priorities in system design. Strategically, we must always ask ourselves: is this automation, this model, aligned with our human values? And do we have the ability to redirect or stop it if necessary? It's not just performance; it's sovereignty over our technological future. Lola (my dog) reminds me every day of the importance of having a clear "boss," even when she's adorably demanding. Here, humanity must remain the boss.
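In practice, "auditability plus the ability to pull the plug" can start as something very simple. This is a hypothetical wrapper pattern of my own devising, not a reference to any specific framework:

```python
import json
import time

class AuditedAgent:
    """Wrap any action-producing callable with an audit trail
    and a kill switch, so the human stays in charge."""

    def __init__(self, policy):
        self.policy = policy  # any callable: observation -> action
        self.log = []
        self.enabled = True

    def act(self, observation):
        if not self.enabled:
            raise RuntimeError("Agent disabled: kill switch engaged")
        action = self.policy(observation)
        # Every decision is recorded with a timestamp for later review.
        self.log.append({"ts": time.time(), "obs": observation, "action": action})
        return action

    def kill(self):
        """Immediately stop the agent from taking further actions."""
        self.enabled = False

    def audit_trail(self) -> str:
        return json.dumps(self.log, indent=2)

agent = AuditedAgent(lambda obs: f"reply:{obs}")
agent.act("hello")
agent.kill()
```

Trivial as it is, building this in from day one is very different from bolting it on after an opaque system is already in production.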

As I wrote in AI is a construction site: between realism and the dawn of autonomous agents, we are in a phase of construction and consolidation. This week has shown me that the construction site is accelerating, and the foundations are becoming more solid and closer to us.

My final take-away

The message is clear: AI is evolving from an abstract, centralized concept to a pragmatic, distributed, and controllable tool. This opens up immense opportunities for those, like me, who want to build automated infrastructures that generate real value, without unnecessary complexities or dependencies. It's time to get our hands dirty, to experiment with these new on-device models, and to integrate proactive agents into our workflows. If you want to delve deeper and discover what can be done, take a look at the Complete AI Tools List.

Next week, I'm sure, we'll have even more insights into how AI is concretely changing the way we work. See you soon!
