
Technology

Technology for hardware implementation of AI algorithms

Readily implementing a trained DL model on FPGA


Helps quickly develop a fast and highly accurate edge AI system

Deep learning is a core component of AI technology. Its cost and power consumption can be reduced without compromising accuracy by running it on dedicated hardware known as an accelerator. Incorporated in an edge device, deep learning can be optimally adapted to systems where a real-time response, independent of the communication environment, is critical. However, converting a deep learning algorithm into hardware circuit data normally takes a long time, because complex settings must be tuned without degrading accuracy.

To meet this challenge, Konica Minolta developed proprietary technology for converting algorithms into hardware circuit data quickly and accurately. This technology improves development efficiency, allows an algorithm to be swiftly updated in response to feedback from field staff, and shortens the cycle of solving customer problems.

Technology Overview

Konica Minolta developed NNgen jointly with Shinya Takamaeda, Associate Professor at the University of Tokyo. NNgen is a high-level synthesis compiler that makes it simple to implement a trained deep learning model on a field-programmable gate array (FPGA), and it is available to the public as an open source program.

NNgen enables engineers and designers to efficiently develop a fast accelerator running on an FPGA from a trained deep learning model, even if they are not experts in hardware tuning. It also lets them develop products and services that carry out AI processing in real time at the edge, where FPGA-equipped devices are located.

Furthermore, since NNgen is an open source program, anyone can use it free of charge or make contributions to its development.

High-level synthesis compiler NNgen for FPGA

Deep learning in particular is attracting much attention in the field of AI, having improved the accuracy of image processing, speech recognition, and many other tasks. At the same time, its computation demands enormous computing resources, so energy-efficient yet high-performance dedicated hardware has become increasingly important.

One type of such dedicated hardware is the FPGA, a device whose circuits can be configured by the user. Because the circuit configuration of an FPGA can be customized to meet specific processing needs, a circuit specially tailored to a trained deep learning model can be deployed, making it possible to build a small and fast accelerator.

NNgen is a domain-specific, extendable high-level synthesis compiler designed to quickly and efficiently implement, on an FPGA, a fast dedicated accelerator for an already built and trained deep learning model. The compiler generates a hardware description (Verilog HDL) and an IP core setting file (IP-XACT) for a model-specific hardware accelerator.

* NNgen is made public on GitHub.
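The overall flow can be pictured with a toy sketch: a network is described in Python as a graph of operator nodes, and a compiler pass walks that graph to assign one hardware unit per operator. The names and structure below are purely illustrative assumptions for this sketch; NNgen's actual API and code generation (documented in its GitHub repository) differ in detail.

```python
# Toy sketch of the idea behind a Python-based model description for
# high-level synthesis: build a dataflow graph, then walk it and emit
# one "module" per operator. Illustrative only; not NNgen's real API.

class Node:
    """A single operator in the dataflow graph."""
    def __init__(self, op, *inputs, **params):
        self.op = op
        self.inputs = inputs
        self.params = params

def placeholder(shape):
    return Node("placeholder", shape=shape)

def conv2d(x, out_ch, ksize):
    return Node("conv2d", x, out_ch=out_ch, ksize=ksize)

def relu(x):
    return Node("relu", x)

def compile_to_modules(output):
    """Topologically walk the graph from the output node and list one
    'module' per operator, mimicking how a synthesis compiler assigns
    hardware units to layers."""
    modules, seen = [], set()
    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for inp in node.inputs:
            visit(inp)
        modules.append(f"{node.op}{node.params or ''}")
    visit(output)
    return modules

# Describe a tiny network, then "compile" it.
x = placeholder((1, 32, 32, 3))
y = relu(conv2d(x, out_ch=64, ksize=3))
print(compile_to_modules(y))
```

In NNgen the analogous final step produces Verilog HDL and an IP-XACT file rather than a Python list, but the programming experience is similar: the model is described once in Python, and the hardware follows from the graph.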

Major features of NNgen

1. A high degree of abstraction for quick implementation

• Accelerator circuit description automatically generated from a Python-based model structure description similar to general deep learning frameworks
• Compatible with ONNX inputs
• Performance optimization simply by setting hardware parameters

2. Extendable compiler functions at multiple granularities

• The multi-paradigm high-level synthesis framework Veriloggen, developed by Associate Professor Shinya Takamaeda, serves as the compiler backend, enabling custom layer additions and other extensions

3. Portability to support various FPGA environments

• Direct generation of Verilog HDL from a trained deep learning model


* Company and product names that appear on this page are registered trademarks or trademarks of relevant companies.

Inquiries about this technology

Contact us
