Training and Implementing a BNN Using Pynq
Using Pynq, we can implement accelerated AI/ML inference on an FPGA without writing a line of HDL! Let’s take a look at how we can do this…
- Creator: Adam Taylor
- Project Name: Training and Implementing a BNN Using Pynq
- Type of Project: Demonstrations (Projects showcasing individual features of a 96Boards product)
- Project Category: Neural Networks, Machine Learning, Python
- Board(s) used: Ultra96
Machine Learning (ML) is a hot topic, finding many use cases and applications. Heterogeneous SoCs like the Zynq and Zynq MPSoC provide a significant advantage, as they allow the inference network to be implemented within programmable logic.
Implementing the inference network in the PL provides a significant increase in performance. Of course, for those unfamiliar with machine learning it can be difficult to know where to start, especially if you want to accelerate performance using programmable logic.
This is where the Pynq framework comes in: it allows us to work with higher-level languages such as Python while accessing programmable logic overlays that perform the ML acceleration.
For this project we are going to use the Quantized / Binary Neural Network (QNN / BNN) overlays available for the Pynq-Z2, Pynq-Z1 and Ultra96.
In this project we will install the BNN overlays and run through one of the examples to demonstrate correct functionality.
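As a sketch of the install step (assuming the board has a working internet connection; the package source below is the Xilinx BNN-PYNQ GitHub repository — check that repository's README for the current command and supported PYNQ image versions):

```shell
# Run on the board itself, either from a terminal or a Jupyter cell.
# Installs the quantized/binary neural network overlays plus the
# Python API used to drive them from a notebook.
sudo pip3 install git+https://github.com/Xilinx/BNN-PYNQ.git
```

After installation, the example notebooks shipped with the package appear in the board's Jupyter file browser, and one of them can be run end-to-end to confirm the overlays load correctly.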
The main thrust of this project, however, will be training a new set of parameters and then applying them to the overlay.
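To make the "binary" in BNN concrete before we get to training, here is a small, self-contained NumPy sketch (not the BNN-PYNQ training flow itself, just an illustration of the core idea): trained real-valued weights are quantized to +1/−1, and a dot product of two ±1 vectors can then be computed with XNOR and popcount, which is exactly why these networks map so efficiently onto programmable logic.

```python
import numpy as np

def binarize(x):
    """Quantize real-valued parameters to +1/-1 (sign function)."""
    return np.where(x >= 0, 1, -1)

def binary_dot(a_bin, w_bin):
    """Dot product of two +/-1 vectors via XNOR and popcount.

    Map +1 -> bit 1 and -1 -> bit 0. XNOR is 1 where the signs
    agree (product +1), so dot = 2 * popcount(xnor) - n.
    """
    a_bits = (a_bin > 0).astype(np.uint8)
    w_bits = (w_bin > 0).astype(np.uint8)
    xnor = ~(a_bits ^ w_bits) & 1
    return 2 * int(xnor.sum()) - len(a_bin)

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # stand-in for trained weights
a = rng.normal(size=8)            # stand-in for activations
w_b, a_b = binarize(w), binarize(a)

# The XNOR/popcount form matches the ordinary dot product of the
# binarized vectors -- no multipliers needed in hardware.
assert binary_dot(a_b, w_b) == int(a_b @ w_b)
```

Training produces the real-valued `w`; deploying to the overlay amounts to loading the binarized form of those parameters, which is what the later retraining steps of this project generate.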
For this project we will be using Pynq on the Arty Z7 / Pynq-Z1.