Machine Learning at the Edge with Xilinx DNN Developer Kit
Programmable logic can accelerate machine learning inference. Let’s take a look at how we can use the Xilinx DNNDK to do this.
- Creator: Adam Taylor
- Project Name: Machine Learning at the Edge with Xilinx DNN Developer Kit
- Type of Project: Demonstrations (Projects showcasing individual features of a 96Boards product)
- Project Category: Machine Learning, Deep Neural Network
- Board(s) used: Ultra96
Vision-based machine learning inference is a hot topic, with implementations being used at the edge for a range of applications, from vehicle detection to pose tracking and the classification and identification of people, objects, and animals.
Due to the complexity of convolutional neural networks, implementing machine learning inference can be computationally intensive. This makes achieving high frame rates challenging with traditional computational architectures. Heterogeneous Systems on Chip such as the Zynq and Zynq MPSoC, which combine high-performance Arm processors with programmable logic, offer a solution that can significantly accelerate inference performance.
The challenge has previously been creating a programmable logic implementation that is easy to use and works with common machine learning frameworks such as Caffe and TensorFlow.