At the closing panel of IoT Stars during Embedded World North America 2025, a group of Edge AI heavyweights gathered to separate the hype from the reality. Moderated by Ed Doran (Edge AI Foundation), the panel featured Zach Shelby (Edge Impulse/Qualcomm), Kate Stewart (Zephyr Project), and Jim Beneke (Tria Technologies).
The consensus was clear: the technology is ready, but the engineering mindset needs to shift from "perfecting models" to "perfecting data pipelines." Below are the essential lessons for engineers looking to survive the transition from Machine Learning R&D to revenue.

Panelists: Zach Shelby, Jim Beneke, Kate Stewart, Ed Doran (Moderator)
Zach Shelby identified data availability as the single biggest hurdle in Edge AI today. His advice to engineering teams is to stop trying to build the "perfect" model before deployment. The teams that succeed are the ones that deploy a "Minimum Viable Model" to the field within a week, not two years. By getting a basic model onto a device quickly, engineers can use model monitoring to capture real-world data, specifically the corner cases that never appear in a lab.
The lesson is to treat your device not just as an inference engine, but as a data collection tool that improves its own intelligence over time through a continuous feedback loop.
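That feedback loop can be sketched in a few lines. The snippet below is a minimal illustration, not any vendor's API: a deployed "Minimum Viable Model" runs inference, and any low-confidence prediction is queued for upload so the training set grows with exactly the real-world corner cases the lab never produced. All names (`FeedbackLoop`, `toy_model`, the threshold value) are hypothetical.

```python
# Hypothetical sketch of a device-side data feedback loop.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.80  # below this, the sample is worth collecting

@dataclass
class FeedbackLoop:
    upload_queue: list = field(default_factory=list)

    def infer(self, sample, model):
        label, confidence = model(sample)
        if confidence < CONFIDENCE_THRESHOLD:
            # A corner case the model is unsure about: keep it for retraining.
            self.upload_queue.append(
                {"sample": sample, "label": label, "confidence": confidence}
            )
        return label

# Stand-in model: treats readings above 0.5 as confident "anomaly" calls.
def toy_model(x):
    return ("anomaly", 0.95) if x > 0.5 else ("normal", 0.60)

loop = FeedbackLoop()
for reading in [0.9, 0.2, 0.7, 0.4]:
    loop.infer(reading, toy_model)

print(len(loop.upload_queue))  # the two low-confidence readings were captured
```

In a real deployment the queue would be batched and uploaded to a labeling pipeline, but the structure (infer, score confidence, harvest the uncertain samples) is the loop the panel described.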
"Why do microcontrollers have to be treated differently than devices running embedded linux?I don’t care, it’s math!" - Zach Shelby
For years, the industry has drawn a sharp line between "TinyML" on microcontrollers and "Edge AI" on Linux gateways. The panel argued that this distinction is becoming irrelevant. As Zach Shelby bluntly put it, "I don't care, it's math." Whether you are running on a Cortex-M or a high-performance NPU capable of 1000 TOPS (Trillions of Operations Per Second), the underlying principles remain the same.
Engineers should stop getting hung up on hardware dogma and focus on the math required to solve the problem. With modern toolchains, the same workflow should scale from a battery-powered sensor to a high-end gateway, allowing the application requirements (not the silicon limitations) to dictate the architecture.
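"It's math" is literal: the kernel an MCU library executes for a quantized layer is the same arithmetic an NPU accelerates. The sketch below (illustrative scales and values, not production code) shows an int8 dense layer with an int32 accumulator and requantization, the pattern used across hardware tiers; only the silicon running it changes.

```python
# Illustrative int8 dense layer: the same math runs on a Cortex-M kernel
# or a high-end NPU; scales and values here are made up for the example.

def dense_int8(inputs, weights, bias, multiplier):
    """inputs/weights are int8 values; bias is int32; multiplier is the
    folded scale factor (in_scale * weight_scale / out_scale)."""
    acc = bias  # int32 accumulator, as on real hardware
    for x, w in zip(inputs, weights):
        acc += x * w
    out = round(acc * multiplier)      # requantize back to int8
    return max(-128, min(127, out))    # saturate to the int8 range

x = [12, -3, 7]   # quantized sensor features (int8)
w = [5, 9, -4]    # quantized weights (int8)
print(dense_int8(x, w, bias=10, multiplier=0.05))
```

Whether this loop is unrolled by a vendor toolchain for a battery-powered sensor or dispatched to an accelerator on a gateway, the engineer's job is the same: pick scales and architecture that fit the application, not the hardware dogma.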
As AI models move into critical infrastructure, "black box" deployments are becoming a liability. Kate Stewart emphasized that when (not if) things go wrong, you need a basis to debug. This requires radical transparency regarding the lineage of your system: where the data came from, how it was trained, and the methodology used.
For IoT engineers, this means that Software Bill of Materials (SBOMs) and data provenance are no longer optional paperwork; they are the only way to defend against security vulnerabilities and diagnose why a model failed in the field. If you can’t trace the decision back to the data, you can’t fix the bug.
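A minimal form of that provenance is just content-addressing: hash the dataset and training configuration at build time so a field failure can be traced back to exactly what the model saw. The record below is a hedged sketch; the field names are illustrative, not a standard SBOM schema.

```python
# Hypothetical minimal model-lineage record built from content hashes.
import hashlib
import json

def sha256_of(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

dataset = b"sensor_readings_batch_42"        # stand-in for real training data
train_config = {"epochs": 50, "lr": 0.001}   # stand-in hyperparameters

lineage = {
    "model_id": "vibration-anomaly-v3",      # hypothetical model name
    "dataset_sha256": sha256_of(dataset),
    "config_sha256": sha256_of(
        json.dumps(train_config, sort_keys=True).encode()
    ),
    "toolchain": "example-trainer 1.2",      # record the training stack, SBOM-style
}

# Later, when a model misbehaves in the field, verify the inputs match:
assert lineage["dataset_sha256"] == sha256_of(dataset)
print(json.dumps(lineage, indent=2))
```

Shipping a record like this alongside the firmware's SBOM gives you the "basis to debug" Stewart called for: the decision traces back to a specific dataset and configuration rather than a black box.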
Jim Beneke highlighted the complexity of the modern Edge AI stack: silicon, sensors, models, and security. It's nearly impossible for one company to realize a full Edge AI solution. The "not invented here" syndrome is a killer in this space. Successful deployments rely on leveraging the ecosystem, whether that involves using open-source projects like Zephyr RTOS or partnering with distributors like Avnet to navigate the hardware supply chain. The takeaway for engineers is to focus on your unique intellectual property and lean on partners for the infrastructure; trying to build the full stack from scratch is a recipe for obsolescence before you even launch.
The panel concluded with a reality check by referencing the "Gartner Hype Cycle." Zach Shelby noted that Edge AI has passed the peak of inflated expectations and is now heading down the curve – which is actually good news. This is the phase where the tourists leave, the real engineering begins, and companies start making actual money. We are moving away from "science fair" demos of chatbots on Raspberry Pis and toward purpose-built, revenue-generating applications in industrial automation and predictive maintenance.
The era of Edge AI experimentation is ending; the era of Edge AI production has begun. Success in this next phase isn't about having the flashiest demo, but about building robust data loops, ensuring system transparency, and choosing the right partners. By shifting focus from model architecture to data quality and system observability, IoT engineers can turn the promise of Edge AI into the reality of profitable, scalable products.