A field perspective on edge computing, partnerships, and operational AI
By Johannes M. Biermann, President & COO, aicas
CES is known for its extremes. On one end, eye-catching consumer products and futuristic demos draw crowds, such as the many robots and humanoids in action. On the other, teams hold quieter, more focused discussions aimed at solving very practical problems. While many conversations revolved around what AI might do one day, a more pressing question kept surfacing: how intelligence is being embedded into machines, vehicles, and industrial systems today, and what it takes to operate those systems reliably over time. Across the board, the key question seemed to be what operational benefit AI can deliver, and what difference it can make to our operations and daily processes right now.
A stroll through the halls, especially those focused on automotive, mobility, and industrial technology, made it clear that AI is no longer viewed primarily as a digital assistant or a cloud feature. Instead, intelligence is moving into physical systems that operate under real constraints. Latency, bandwidth, cost, safety, and lifecycle responsibility are no longer secondary considerations. They are shaping architecture decisions right now.
From Spectacle to Systems: What Physical AI Really Demands
Physical AI was the dominant buzzword this year, often paired with robotics and autonomy. While humanoids, industrial robots, robot dogs, and autonomous machines attracted attention, the more relevant shift was less visible. Companies are increasingly grappling with how to make AI-augmented systems reliable, scalable, and economically viable beyond the demo stage. There is strong momentum behind robotics, but also a clear awareness that deploying autonomy at scale remains a complex operational challenge.
CES 2026 highlighted that physical AI is ultimately a systems challenge. Intelligence in the physical world must behave predictably, recover gracefully, and evolve safely over time. This puts new demands on architecture, testing, validation, and operational governance.
Autonomous machines were ubiquitous across the show floor, yet deployment timelines remain uneven. Some industrial players appear more advanced in translating autonomy into repeatable operations, while many others are still navigating the gap between demonstration and sustained real-world use. The question is no longer whether autonomy works in principle, but under which conditions it can be deployed responsibly, economically, and at scale.
Why the Edge Has Become the Center of Gravity
A consistent talking point throughout CES 2026 was that edge computing has become the primary focus of innovation. Whether physical AI, edge AI, or agent-based systems, the conclusion was similar: intelligence needs to run closer to where data is created.
This shift is not driven by a loss of relevance for the cloud. Rather, physical systems demand responsiveness, resilience, and cost control, none of which can be achieved through centralization alone. This became especially apparent in discussions around software-defined vehicles, advanced driver assistance systems, and long-lived platforms that must evolve safely over time in automotive and embedded environments.
Another notable change was who set the tone on the show floor. While many OEMs kept a relatively low booth presence, their teams were everywhere, actively scouting technologies, partners, and execution enablers. In contrast, suppliers and technology providers used CES to demonstrate production pathways rather than concepts. Across automotive and industrial domains, the message was consistent: reducing costs, accelerating time-to-market, and proving that solutions can be operated at scale matter more than showcasing the next flashy prototype.
Execution Happens in Ecosystems
CES also reiterated the importance of partnerships. No company is developing software-defined vehicles, autonomous systems, or physical AI solutions independently. Ecosystems matter, and CES 2026 made that tangible.
Conversations with long-standing customers and partners focused less on vision statements and more on alignment. How do platforms integrate? How do systems behave in the field? How are updates, validation, and governance managed over time?
Discussions with Siemens highlighted how industrial automation, software platforms, and marketplaces are converging to accelerate adoption across complex environments. Exchanges with long-term technology partners such as NXP and AWS underlined the importance of deliberately aligning edge and cloud architectures, as AI becomes more operational and distributed.
Meetings with Qualcomm revealed a growing trend toward platform-based approaches in the automotive and embedded systems industries, where performance, lifecycle management, ecosystem compatibility, and data-driven use cases are becoming increasingly important. Meanwhile, conversations with start-ups like ATYM illustrated a broader CES pattern: pragmatic integration paths between startups and established players are critical to reducing time-to-value and enabling real customer deployments.
Industry networks played an equally important role. The COVESA reception once again demonstrated why long-term ecosystems matter. Its evolving membership reflects the industry’s shift towards software-defined vehicles and increasingly physical AI-driven mobility use cases.
CES 2026 did not deliver a single defining breakthrough. Instead, it showed how previously separate developments are now converging. The edge is where intelligence becomes operational. Partnerships are the mechanism for execution. The next competitive advantage will not come from adding AI as a feature, but from running it reliably in the physical world.
This shift is already underway. CES simply made it harder to ignore.