Hackster.io’s Adaptive Computing Developer Contest with Xilinx
This week Sundance entered an exciting challenge hosted by Hackster.io in association with Xilinx.
Adaptive computing is behind the innovations changing the world as we know it today, across applications that span from the home and roads to hospitals and the skies.
Xilinx and Hackster.io challenged developers to combine the power of Xilinx adaptive computing platforms with the Vitis development environment and Vitis AI to solve real-world problems of today.
There were three categories in total. Sundance entered a team into Category 1, Adaptive Intelligence of Things, and the Sundance AI team entered Category 2, Intelligent Video Analytics.
CATEGORY 1: Adaptive Intelligence of Things
The age of IoT 2.0 is upon us. The “Internet of Things” has grown up to become the “Intelligence of Things”. Objects all around us are not only connected but also use data collected from sensors to make decisions and take action, improving our quality of life and driving efficiency.
Use an Avnet Ultra96-V2 board and the Vitis / Vitis AI software platform to build any “Intelligence of Things” application to show us how you’d leverage the power of adaptive computing in the age of IoT 2.0.
We’re looking for the best use of hardware acceleration with programmable logic!
- Use an Avnet Ultra96-V2 Board
- Use Vitis / Vitis AI to build an Adaptive Intelligence of Things application
- Run at least one machine learning model
Category 1 submission – Team Sundance: Accelerated Visual Sense using RealSense Lidar
Creating spatially aware AI using a RealSense camera and the Ultra96-V2.
At Sundance we are currently looking into spatial AI using the RealSense platform from Intel on our in-house board, the Sundance VCS system. We are using this technology in the filming industry to increase the autonomous capabilities of the Agito robot from Motion Impossible.
Currently, AI-based image processing is usually performed on the full frame, meaning that everything in the field of view is processed, up to a set distance. This works, but it can produce unwanted predictions that have to be ironed out later in code, which is inefficient when building streamlined robotic platforms.
An issue we faced with our standard camera technology was that it tended to pick up hazards in the distance before they needed to be detected. This resulted in unnecessary post-processing, which made the system sluggish and increased its power draw.
A greater sense of spatial awareness was needed, so we worked on face tracking limited to faces within 1 m of the camera. For the full story and the steps we took towards this solution, including download links to the code used, see our submission here.
Spatial awareness face tracking – detection limited to <1 m from the camera
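The idea behind the depth gate can be sketched as follows. This is a minimal illustration in Python with NumPy, not the code from our submission: it assumes face detections arrive as pixel-space bounding boxes and that a depth map (in metres, aligned to the colour frame, as the RealSense SDK can provide) is available; the helper name is hypothetical.

```python
import numpy as np

def filter_by_depth(detections, depth_map, max_dist_m=1.0):
    """Keep only detections whose depth at the box centre is within
    max_dist_m of the camera.

    detections: list of (x, y, w, h) boxes in pixel coordinates.
    depth_map:  2-D array of per-pixel distances in metres,
                aligned to the colour frame.
    """
    kept = []
    for (x, y, w, h) in detections:
        cx, cy = x + w // 2, y + h // 2
        # Sample a small patch around the box centre; the median is
        # robust to depth-sensor dropouts (reported as zeros).
        patch = depth_map[max(cy - 2, 0):cy + 3, max(cx - 2, 0):cx + 3]
        valid = patch[patch > 0]
        if valid.size and np.median(valid) <= max_dist_m:
            kept.append((x, y, w, h))
    return kept

# Synthetic example: one face at ~0.8 m, one at ~3 m.
depth = np.full((480, 640), 3.0)      # background at 3 m
depth[100:200, 100:200] = 0.8         # near face region
boxes = [(120, 120, 60, 60), (400, 300, 60, 60)]
print(filter_by_depth(boxes, depth))  # only the near box survives
```

Anything beyond the distance gate is discarded before any further processing, which is what removes the sluggish post-processing of distant hazards described above.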
CATEGORY 2: Intelligent Video Analytics
We are in the age of cameras! An abundance of video data is created every millisecond around the world. Thanks to intelligent video analytics, all that raw video data can help build a future with safer cities, better infrastructure, better medical diagnoses and much more.
Xilinx Zynq UltraScale+ MPSoC-EV devices, with a built-in video codec unit (VCU), provide the real-time performance that is critical in so many video analytics applications.
We want to see what kind of intelligent video analytics solutions you can create with the Zynq UltraScale+ MPSoC ZCU104 Evaluation Kit combined with Vitis / Vitis AI.
- Use a Xilinx Zynq UltraScale+ MPSoC ZCU104 Evaluation Kit
- Use Vitis / Vitis AI to build an intelligent video analytics solution
- Utilize both the video codec unit (VCU) and the deep learning processing unit (DPU) inside the Zynq UltraScale+ MPSoC
- Run at least one machine learning model
Category 2 submission – Team Sundance AI: Fruit Detection Using MPSoCs
Fruit detection at the edge using a ZCU104.
The key goal of this project was to enable fruit detection on the edge using FPGA technology. After some consideration, we opted to use a Xilinx MPSoC device, specifically the ZCU104 FPGA development board.
Accelerating YOLOv3 models to run on the Xilinx Vitis-AI platform provides a fast and power-efficient AI classification method that can be used in fields ranging from agriculture to industry. In this regard, our system fulfils all aspects of Adaptive Compute Acceleration, providing high-performance inference at the edge: it has low power requirements and enables rapid changes in workflow by making it easy to train and deploy a new, improved model.
Our first step was obtaining datasets of oranges and apples and annotating them appropriately. We then used these high-quality datasets to train the YOLOv3 CNN on a GPU (CUDA architecture). This process was repeated iteratively, incrementally improving the performance of the CNN.
Once we had a successful CNN running on a GPU, our next task was to use the Vitis-AI toolchain to accelerate the model for the ZCU104; we concluded our work with successful system testing.
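A YOLOv3-style detector emits many overlapping candidate boxes per fruit, so the raw detector output is typically filtered on the host CPU with a confidence threshold followed by non-maximum suppression before boxes are drawn. The sketch below shows this generic post-processing step in Python with NumPy; it is an illustration of the standard technique, not our exact submission code.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def postprocess(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Confidence filter + greedy NMS; returns indices of kept boxes."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    kept = []
    while order:
        best = order.pop(0)           # highest-scoring remaining box
        kept.append(int(best))
        # Drop every remaining box that overlaps it too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return kept

# Example: two overlapping detections of the same orange plus one apple.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [200, 200, 240, 240]],
                 dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(postprocess(boxes, scores))  # prints [0, 2]: the duplicate is suppressed
```

On an MPSoC deployment this step runs on the Arm cores after the accelerated inference, so keeping it simple and vectorisable matters for end-to-end throughput.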
Overall, the system performed as expected, providing adequate inference speed at the edge whilst being very power efficient. This makes our system a viable candidate for future use in agriculture.
Further work is still needed, however. You can see from the image that the bounding boxes are sometimes offset from their target, which is not the case with the demo models provided with Xilinx Vitis-AI; we have yet to find the cause of this artefact. We also plan to optimise the ZCU104 implementation using the Vitis AI optimiser to prune the network without significantly degrading accuracy, and to test the network thoroughly in real-world farming applications.
There is a very detailed submission for this; to see it in full, please click here.