
Annotating with Tempo Studio
Closing the feedback loop in Edge AI
Labs

20 June 2024
Bring ambitious, sensor-driven machine-learning products into production through integrated data collection and testing.
One of the biggest challenges we faced early on was the disconnect between data collection in the field and model development in the lab. Too often, AI is built in isolation—models are trained, deployed to edge devices, and then… that’s it. There’s little visibility into how they actually perform in the real world or whether they hold up in edge cases.
To solve this, we built an internal tool called Tempo Studio.
Tempo Studio is a dashboard and companion app we designed to give our entire team, from machine learning engineers to data collection leads and QA testers, a shared source of truth. It gives us a clear view into the size, health, and structure of our datasets, all in one place. We can browse and play back time-series samples, crop segments, leave notes, and annotate with precision, essentially turning raw sensor data into usable training material.
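To make that annotation step concrete, here is a minimal sketch of what a single time-series annotation record might look like, assuming a Python data model. The SegmentAnnotation class and its field names are illustrative only, not Tempo Studio's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SegmentAnnotation:
    """One labelled crop of a raw sensor recording (hypothetical schema)."""
    sample_id: str                 # which raw recording this crop belongs to
    modality: str                  # e.g. "audio", "motion", "image"
    start_s: float                 # crop start, in seconds from recording start
    end_s: float                   # crop end
    label: str                     # class label applied by the annotator
    annotator: str                 # who labelled it, useful for QA and review
    note: Optional[str] = None     # free-form note left on the segment
    tags: List[str] = field(default_factory=list)  # e.g. ["edge-case", "noisy"]


# Example: marking a 1.5-second slice of an audio recording as a door-knock event.
knock = SegmentAnnotation(
    sample_id="rec_0142",
    modality="audio",
    start_s=12.0,
    end_s=13.5,
    label="door_knock",
    annotator="qa_tester_01",
    note="Faint knock recorded during a road-noise test",
    tags=["edge-case"],
)
```

A record like this is what turns a raw recording into training material: the crop bounds, label, and notes travel together, so the same segment can be reviewed, corrected, or exported into a training set later.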
We developed the tool alongside our work with Ford, where it helped us manage, clean, and operationalize over 100,000 audio, motion, and image samples. Tempo Studio also integrates with dev hardware, so teams in the field can record new data and upload it straight into the platform, closing the gap between real-world testing and model iteration.
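As a rough illustration of that field-to-platform flow, the sketch below shows how a device-side script might push a new recording plus metadata to an ingestion endpoint. The URL, token handling, and payload shape are assumptions made for the example, not Tempo Studio's real API.

```python
import json

import requests  # third-party; pip install requests

# Hypothetical ingestion endpoint and device token, for illustration only.
STUDIO_URL = "https://studio.example.com/api/samples"
API_TOKEN = "replace-with-device-token"


def upload_sample(path: str, modality: str, device_id: str, notes: str = "") -> str:
    """Upload one raw sensor recording plus metadata; return the new sample ID."""
    metadata = {"modality": modality, "device_id": device_id, "notes": notes}
    with open(path, "rb") as f:
        resp = requests.post(
            STUDIO_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"file": f},                      # the raw recording itself
            data={"metadata": json.dumps(metadata)},  # context captured in the field
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["sample_id"]


# Example: pushing an in-vehicle audio capture straight from a test drive.
# sample_id = upload_sample("captures/cabin_noise_017.wav", "audio", "dev-kit-07",
#                           notes="Highway run, windows cracked")
```

The point of a flow like this is that a recording made during a test drive lands in the same dataset the modelling team is already looking at, rather than sitting on a laptop until someone remembers to transfer it.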
By giving everyone access to the same real-time dataset, Tempo Studio made it possible to move faster, stay aligned, and build edge AI models that are grounded in real-world performance.