Over the past few months, our team has been working on something that lives at the intersection of artificial intelligence, robotics, embedded systems, and real-world engineering: a fully autonomous delivery robot designed not just as a concept, but as a system that operates under real constraints.
This project began as a graduation requirement, but it quickly became a research challenge. We weren’t simply asking whether a robot could move or navigate. We were asking whether it was possible to build an autonomous system that remains reliable, deterministic, and deployable when computation, power, and hardware resources are limited.
One of the first lessons we learned was that autonomy means very little without real-time stability. Early on, it became clear that motor control and safety could not compete with AI workloads for system resources. That realization shaped the entire architecture. High-level autonomy and perception were handled separately from low-level control, allowing time-critical tasks to remain deterministic even when the system was under load. This separation fundamentally changed how the robot behaved—it stopped feeling like a prototype and started acting like a real system.
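To illustrate that separation, here is a minimal Python sketch (the names, rates, and structure are hypothetical, not our actual firmware): a fixed-rate control loop reads the latest planner setpoint from a non-blocking, single-slot mailbox, so a slow planner can never stall the time-critical loop.

```python
import threading
import time

class LatestValue:
    """Single-slot, thread-safe mailbox: the reader always gets the
    newest value and never blocks waiting on the writer."""
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._value = initial

    def set(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value

def control_loop(setpoint_box, ticks, period_s=0.01):
    """Time-critical loop: ticks at a fixed period regardless of how
    long planning takes, reading the newest setpoint each tick."""
    observed = []
    next_deadline = time.monotonic()
    for _ in range(ticks):
        observed.append(setpoint_box.get())  # never blocks on the planner
        # ... motor command toward the setpoint would be applied here ...
        next_deadline += period_s
        time.sleep(max(0.0, next_deadline - time.monotonic()))
    return observed
```

The key design choice is that the control loop only ever reads whatever value is newest; if the planner falls behind, the loop keeps ticking on the last known setpoint instead of waiting.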
Running AI on a mobile robot introduced a different set of challenges. Unlike in desktop environments, the robot had to perform perception and decision-making on edge hardware, in real time, and without constant cloud access. This forced us to rethink what “good” AI performance really means in robotics. Consistency, latency, and graceful failure became more important than raw accuracy. The result was a perception pipeline that integrated tightly with navigation and control rather than operating in isolation.
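A hedged sketch of the "graceful failure" idea (hypothetical names; not the actual pipeline code): treat stale perception output as missing, and degrade to a safe command rather than acting on old data.

```python
import time

class PerceptionGate:
    """If no fresh detection arrives within the latency budget,
    command a safe fallback (stop) instead of trusting old data."""
    def __init__(self, stale_after_s=0.3):  # assumed budget; tune per platform
        self.stale_after_s = stale_after_s
        self._last_stamp = None
        self._last_obstacle_free = False

    def update(self, obstacle_free, stamp=None):
        """Record the latest perception result and its timestamp."""
        self._last_stamp = time.monotonic() if stamp is None else stamp
        self._last_obstacle_free = obstacle_free

    def speed_command(self, cruise_speed, now=None):
        """Return a speed: cruise only if perception is fresh and clear."""
        now = time.monotonic() if now is None else now
        if self._last_stamp is None or now - self._last_stamp > self.stale_after_s:
            return 0.0  # fail safe: stop on stale or missing data
        if not self._last_obstacle_free:
            return 0.0  # obstacle detected: stop
        return cruise_speed
```

This is why latency matters more than raw accuracy here: a perfect detection that arrives too late is treated exactly like no detection at all.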
Before any of this reached the real world, the robot lived in simulation. Simulation was not treated as a visualization tool, but as a validation environment where navigation logic, perception behavior, and failure cases could be explored safely. Each successful simulation test had to survive contact with reality, and each failure informed the next design iteration. This loop of testing, deployment, and refinement shaped the system more than any single algorithm choice.
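The test-and-refine loop can be illustrated with a deliberately tiny example (a toy 1-D rollout, nothing like the real navigation stack): each simulation run asserts both that the happy path succeeds within a budget and that an out-of-budget case is reported as a failure rather than running forever.

```python
def simulate_to_goal(start, goal, step=0.1, max_steps=200):
    """Toy 1-D kinematic rollout: move toward the goal by at most
    `step` per tick; report success and the tick count."""
    pos = start
    for ticks in range(1, max_steps + 1):
        remaining = goal - pos
        pos += max(-step, min(step, remaining))  # clamp motion per tick
        if abs(goal - pos) < 1e-9:
            return True, ticks
    return False, max_steps
```

Even a toy like this captures the discipline: every run has a budget, and exceeding the budget is a reportable failure that feeds the next iteration, not an infinite loop.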
As the system matured, observability and safety became central concerns. Autonomy does not mean removing humans from the loop entirely. Cloud monitoring, telemetry, and firmware-level failsafes were integrated to ensure that system behavior remained transparent and predictable. When something went wrong, the system needed to fail safely, not silently.
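A minimal sketch of the "fail safely, not silently" principle (hypothetical interface; the real failsafe lives at the firmware level): a heartbeat watchdog that disables the motors on timeout and records why in telemetry.

```python
class HeartbeatWatchdog:
    """Failsafe sketch: motors stay enabled only while heartbeats
    keep arriving within the timeout; every trip is logged so the
    failure is visible, not silent."""
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self._last_beat = None
        self.events = []  # (time, reason) telemetry records

    def beat(self, now):
        """Record a heartbeat received at time `now` (seconds)."""
        self._last_beat = now

    def motors_enabled(self, now):
        """True only if a heartbeat arrived within the timeout window."""
        if self._last_beat is None:
            self.events.append((now, "no heartbeat yet"))
            return False
        if now - self._last_beat > self.timeout_s:
            self.events.append((now, "heartbeat timeout"))
            return False
        return True
```

The telemetry log is the point: when the watchdog trips, an operator can see when and why the motors were disabled instead of finding a robot that simply stopped.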
What emerged from this process was a unified autonomous platform that integrates mechanical design, embedded real-time control, AI-based perception, simulation-driven validation, and cloud monitoring into a single coherent system. This work has since been formalized and published as “A Unified AI, Embedded, Simulation, and Mechanical Design Approach to an Autonomous Delivery Robot” on arXiv.
More than a graduation project, this effort became an exploration of how real autonomous systems are built. Robotics is rarely about perfect models or isolated components. It is about system-level decisions, trade-offs, and making complex technologies work together in the real world.
This website is where we document that journey.