
Spatial Spotlight: Stray Robots

"Spatial Spotlight" is a series sharing a Spatial8 ecosystem partner's story.


ZÜRICH, SWITZERLAND (January 12, 2022) – Stray Robots was built on the realization that tools for 3D computer vision were lacking. Tools built for 2D computer vision can be hacked to work for 3D vision on a case-by-case basis, but the Stray Robots team thought it would be worth building something from the ground up that is actually designed with and for 3D vision.


While the past decade has seen an explosion in computer vision technology, most of it is inherently 2D, detecting or classifying features on images. In robotics and Augmented Reality applications, 2D computer vision is only part of the story: you also need to know how far away objects are and which way they are oriented. With the proliferation of depth cameras over the past decade, there are now many more 3D-capable devices, such as the iPhone Pro lineup, which pairs LiDAR sensors with high-resolution cameras and powerful processors. Software is now the real limiting factor in enabling intelligent applications.


The Stray Studio desktop application.


Stray Robots was founded in 2021 by Kenneth Blomqvist and Julius Hietala, who first met while studying computer science at Aalto University in 2012. Before starting Stray Robots, Kenneth was researching robotic manipulation at ETH Zurich, and he has software engineering experience from companies like Wolt and Webflow. Julius previously researched folding fabrics with robots using reinforcement learning, and before that he worked on advertising technology at Smartly.


Stray Robots offers a full toolkit for building apps with 3D vision capabilities. The toolkit includes data collection, processing, and annotation tools, as well as a 3D computer vision software development kit (SDK).


An example of what Stray Studio can do.

To build applications with the Stray Toolkit, you first create detailed 3D reconstructions from a set of scans. You then annotate those scans in 3D with the information your application needs to estimate. Once you are done annotating, you can bake that knowledge into a machine learning computer vision model, which you can use in your app through the Stray Vision SDK. If you would rather develop your own algorithms instead of using the SDK, Stray won't stand in your way: all the processed data is organized and there for you to work with. A rough sketch of this workflow follows below.
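
To make the shape of that pipeline concrete, here is a minimal Python sketch of the scan–annotate–train–deploy loop described above. The module, function, and attribute names are hypothetical placeholders for illustration only, not the actual Stray Toolkit or Stray Vision SDK API.

    # Hypothetical sketch of the workflow described above.
    # All names below (stray_toolkit_placeholder and its functions)
    # are illustrative placeholders, not the real Stray API.

    from pathlib import Path

    import stray_toolkit_placeholder as stray  # hypothetical module

    # 1. Reconstruct detailed 3D scenes from a set of raw scans
    #    (for example RGB + depth captured on a LiDAR-equipped phone).
    scans = stray.import_scans(Path("data/raw_scans"))
    scenes = [stray.reconstruct(scan) for scan in scans]

    # 2. Annotate the reconstructed scenes in 3D with whatever the
    #    application needs to estimate (boxes, keypoints, poses).
    #    In practice this step is done interactively in Stray Studio.
    annotations = stray.load_annotations(Path("data/annotations"))

    # 3. "Bake" the annotated scenes into a trained vision model.
    model = stray.train(scenes, annotations, task="object_pose")

    # 4. Use the model in an app to get 3D detections
    #    (position and orientation, not just 2D boxes).
    frame = stray.capture_frame()
    for detection in model.predict(frame):
        print(detection.label, detection.position, detection.orientation)

Because the processed scans and annotations remain available as organized data, the training and inference steps could equally be replaced with custom algorithms.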


So far, the Stray Robots toolkit has been used by robotics companies to build pick-and-place applications and to detect objects in warehouses. In Augmented Reality, it has been used for keypoint detection, object measurement, and object pose estimation.


Stray Robots is on a mission to structure the world’s visual data and teach robots and computers to see. If that sounds exciting to you, don’t hesitate to visit them at their website or reach out to them at hello@strayrobots.io.


