ROS2 Publisher for Oak-D-Lite

The OAK-D Lite is a Spatial AI camera by Luxonis used for stereo visual perception. It comprises two cameras, spaced a fixed baseline apart, providing stereoscopic vision capabilities. Onboard neural-network processing, together with a low introductory price, makes it an interesting choice for a variety of applications in the field.

The OAK camera uses a single USB-C cable for both communication and power. It supports both USB2 and USB3 (5 Gbps / 10 Gbps).

Specifications for the OAK-D Lite camera:

Ivica Matić

Camera spec              Color camera                      Stereo pair
Sensor                   IMX214                            OV7251
DFOV / HFOV / VFOV       81° / 69° / 54°                   86° / 73° / 58°
Resolution               13 MP (4208×3120)                 480P (640×480)
Focus                    AF: 8 cm – ∞ or FF: 50 cm – ∞     Fixed-focus: 6.5 cm – ∞
Max framerate            60 FPS                            200 FPS
F-number                 2.2 ± 5%                          2.2
Lens size                1/3.1 inch                        1/7.5 inch
Effective focal length   3.37 mm                           1.3 mm
Distortion               < 1%                              < 1.5%
Pixel size               1.12 µm × 1.12 µm                 3 µm × 3 µm
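As a side note, the HFOV and horizontal resolution in the table above determine the focal length in pixels, which is what stereo depth calculations work with. Assuming the standard pinhole-camera approximation (a simplification, not a figure from the Luxonis datasheet), it can be computed like this:

```python
import math

def focal_length_px(width_px: float, hfov_deg: float) -> float:
    """Horizontal focal length in pixels under the pinhole model:
    f = w / (2 * tan(HFOV / 2))."""
    return width_px / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))

# Stereo pair from the table: 640 px wide, 73 deg HFOV -> about 432 px.
f_stereo = focal_length_px(640, 73.0)

# Color camera: 4208 px wide, 69 deg HFOV.
f_rgb = focal_length_px(4208, 69.0)
```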

ROS2 Publisher

To make it easier to integrate the OAK-D Lite with both existing and newly started projects, we decided to write our own ROS2 publisher, which publishes the necessary data on appropriate ROS topics. The solution expands on the Luxonis depthai demo program, creating custom callbacks for the needed functionality and publishing the data on ROS topics.

The topics published are:

  • /oakd/rgb, ROS2 Image format
  • /oakd/spatial, ROS2 Float32MultiArray format
  • /oakd/twodimensional, ROS2 Float32MultiArray format
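The exact element layout of the Float32MultiArray messages is not specified here, but as an illustration, a helper along these lines could flatten per-detection spatial results into the flat float list the message carries (the field names and the five-floats-per-detection layout are assumptions for the sketch, not the publisher's actual format):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SpatialDetection:
    # Hypothetical detection record; field names are assumptions.
    label: int        # class id from the on-device network
    confidence: float # detection score in [0, 1]
    x_mm: float       # spatial coordinates relative to the camera
    y_mm: float
    z_mm: float

def flatten_detections(dets: List[SpatialDetection]) -> List[float]:
    """Pack detections into the flat float list that a
    Float32MultiArray's `data` field would carry
    (5 floats per detection)."""
    out: List[float] = []
    for d in dets:
        out.extend([float(d.label), d.confidence, d.x_mm, d.y_mm, d.z_mm])
    return out
```

On the receiving side, a subscriber would reshape the list back into rows of five to recover the individual detections.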

Because our ROS2 publishing script makes interfacing with the OAK-D Lite easier, we can use it in the system to further accelerate our computer-vision object-detection pipeline by splitting the workload between multiple detection devices. And the beauty of using ROS2 to connect the various system components is that we can use any host device fit for our purpose.

For example, we can further accelerate our system by feeding the host device detection data straight from the OAK-D publisher, while a YOLOv5 detector runs detections on the publisher's RGB data.
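With two detection sources running in parallel (the camera's on-device network and the host-side YOLOv5), the same object may be reported twice. One simple way to merge the two streams, sketched here as an assumption rather than the article's actual method, is IoU-based de-duplication of bounding boxes:

```python
from typing import List, Tuple

# Axis-aligned bounding box: (xmin, ymin, xmax, ymax)
Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_detections(primary: List[Box], secondary: List[Box],
                     iou_thresh: float = 0.5) -> List[Box]:
    """Keep all primary boxes; add a secondary box only if it does
    not overlap an already-kept box above `iou_thresh`."""
    merged = list(primary)
    for box in secondary:
        if all(iou(box, kept) < iou_thresh for kept in merged):
            merged.append(box)
    return merged
```

Here the on-device detections could be treated as primary and the host-side YOLOv5 results as secondary (or vice versa), so each object is reported once downstream.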

In that case, the diagram above would look something like this:


You can find the code for the ‘Yolo Object Detection Inference’ box from the above diagram in the python file below.



Contact us for more information