Neural Network from CEVA

CEVA has introduced CDNN2 (CEVA Deep Neural Network), the second generation of its neural network software framework for machine learning.

CDNN2 enables localised, deep learning-based video analytics on camera devices in real time. This significantly reduces data bandwidth and storage compared to running such analytics in the cloud, while lowering latency and increasing privacy.

Coupled with the CEVA-XM4 intelligent vision processor, CDNN2 offers significant time-to-market and power advantages for implementing machine learning in embedded systems for smartphones, advanced driver assistance systems (ADAS), surveillance equipment, drones, robots and other camera-enabled smart devices.


CDNN2 builds on CEVA’s first generation neural network software framework (CDNN), which is already in design with multiple customers and partners. It adds support for TensorFlow, Google’s software library for machine learning, and offers improved capabilities and performance for the latest and most sophisticated network topologies and layers.
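
The conversion flow starts from a network trained in a standard framework. As a rough illustration only (the model choice is arbitrary and the actual CDNN2 import tooling is not shown here), a pre-trained TensorFlow/Keras network such as VGG-16 can be loaded and saved to disk, ready to be handed to an offline converter:

```python
# Illustrative only: export a pre-trained TensorFlow/Keras model so that an
# offline converter could consume it. The export format expected by CDNN2
# itself is not documented in this article.
import tensorflow as tf

# Load a publicly available pre-trained network (VGG-16, one of the
# topologies listed as supported).
model = tf.keras.applications.VGG16(weights="imagenet")

# Save the trained graph and weights to disk for an offline conversion step.
model.save("vgg16_pretrained.h5")
print(model.count_params(), "parameters exported")
```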

CDNN2 also supports fully convolutional networks, thereby allowing any given network to work with any input resolution.
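
The practical consequence of a fully convolutional design is that no layer hard-codes the spatial input size. A minimal TensorFlow/Keras sketch, purely illustrative and unrelated to the CDNN2 APIs, shows one set of weights handling two different resolutions:

```python
# Illustration: a fully convolutional model has no dense layers, so its
# spatial input size can be left undefined and varied at run time.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(None, None, 3))           # height/width unspecified
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
outputs = tf.keras.layers.Conv2D(8, 1, activation="relu")(x)  # 1x1 conv "head"
fcn = tf.keras.Model(inputs, outputs)

# The same network runs on two different input resolutions.
print(fcn(np.random.rand(1, 224, 224, 3).astype("float32")).shape)  # (1, 112, 112, 8)
print(fcn(np.random.rand(1, 480, 640, 3).astype("float32")).shape)  # (1, 240, 320, 8)
```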

Using a set of enhanced APIs, CDNN2 improves the overall system performance, including direct offload from the CPU to the CEVA-XM4 for various neural network-related tasks.
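
The article does not publish the CDNN2 API, so the following is only a hypothetical sketch of what CPU-to-DSP partitioning implies in general: layer types the vision processor supports are dispatched to it, while everything else falls back to the host CPU. All names here are invented for illustration:

```python
# Hypothetical sketch of CPU-to-DSP offload partitioning; none of these
# names correspond to the actual CDNN2 API.
OFFLOADABLE = {"conv", "pool", "fully_connected", "softmax", "concat"}

def run_network(layers, dsp_runtime, cpu_runtime):
    """Dispatch each layer to the DSP when supported, otherwise to the CPU."""
    activations = None
    for layer in layers:
        runtime = dsp_runtime if layer["type"] in OFFLOADABLE else cpu_runtime
        activations = runtime.execute(layer, activations)
    return activations
```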

These enhancements, combined with the “push-button” capability that automatically converts pre-trained networks to run on the CEVA-XM4, underpin the significant time-to-market and power advantages that CDNN2 offers for developing embedded vision systems.

The end result is that CDNN2 generates a faster network model for the CEVA-XM4 imaging and vision DSP, consuming significantly lower power and memory bandwidth compared to CPU- and GPU-based systems.
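
A back-of-the-envelope calculation shows where part of the bandwidth saving comes from: storing weights in an 8-bit fixed-point format instead of 32-bit floating point moves roughly a quarter of the data per inference. (The 8-bit width and the VGG-16 parameter count are assumptions used only for illustration; the article does not state which precision the converted networks use.)

```python
# Rough memory-traffic comparison for one network's weights (VGG-16 size,
# ~138 million parameters, used purely as an example).
params = 138_000_000
float32_bytes = params * 4          # 32-bit floating point
fixed8_bytes = params * 1           # 8-bit fixed point
print(f"float32 weights: {float32_bytes / 1e6:.0f} MB")   # ~552 MB
print(f"8-bit fixed point: {fixed8_bytes / 1e6:.0f} MB")  # ~138 MB
```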

“The enhancements we have introduced in our second generation Deep Neural Network framework are the result of extensive in-the-field experience with CEVA-XM4 customers and partners,” says CEVA’s Eran Briman. “They are developing and deploying deep learning systems utilizing CDNN for a broad range of end markets, including drones, ADAS and surveillance. In particular, the addition of support for networks generated by TensorFlow is a critical enhancement that ensures our customers can leverage Google’s powerful deep learning system for their next-generation AI devices.”

CDNN2 is intended for object recognition, advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR), virtual reality (VR) and similar computer vision applications.

The CDNN2 software library is supplied as source code, extending the CEVA-XM4’s existing Application Developer Kit (ADK) and computer vision library, CEVA-CV. It is flexible and modular, capable of supporting either complete CNN implementations or specific layers for a wide breadth of networks.

These networks include AlexNet, GoogLeNet, ResidualNet (ResNet), SegNet, VGG (VGG-19, VGG-16, VGG_S) and Network-in-Network (NIN), among others. CDNN2 supports the most advanced neural network layers, including convolution, deconvolution, pooling, fully connected, softmax, concatenation and upsample, as well as various Inception models.

All network topologies are supported, including multiple-input-multiple-output, multiple layers per level and fully convolutional networks, as well as linear networks (such as AlexNet).
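
To make “multiple layers per level” concrete, the sketch below builds a minimal branched, Inception-style block in TensorFlow/Keras (illustrative only): two parallel convolutions at the same level, merged by the concatenation layer type listed above.

```python
# Illustration of a non-linear topology: two parallel branches at one level,
# merged by concatenation (an Inception-style building block).
import tensorflow as tf

inputs = tf.keras.Input(shape=(56, 56, 64))
branch_1x1 = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(inputs)
branch_3x3 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
merged = tf.keras.layers.Concatenate()([branch_1x1, branch_3x3])
block = tf.keras.Model(inputs, merged)
block.summary()  # shows the two parallel paths feeding one concatenation layer
```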

A key component within the CDNN2 framework is the offline CEVA Network Generator, which converts a pre-trained neural network to an equivalent embedded-friendly network in fixed-point math at the push of a button. CDNN2 deliverables include a hardware-based development kit which allows developers not only to run their network in simulation, but also to run it on the CEVA development board in real time.
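
The Network Generator itself is proprietary, but the underlying idea of converting trained floating-point weights to an embedded-friendly fixed-point representation can be sketched with a simple symmetric quantisation in NumPy (the 8-bit width is chosen arbitrarily; the actual conversion CDNN2 performs is not described in this detail):

```python
# Sketch of the general float-to-fixed-point idea behind offline conversion
# (simple symmetric 8-bit quantisation; not CEVA's actual algorithm).
import numpy as np

weights = np.random.randn(3, 3, 64, 128).astype(np.float32)  # a trained conv kernel

scale = np.abs(weights).max() / 127.0          # map the largest magnitude to the int8 range
w_fixed = np.round(weights / scale).astype(np.int8)

# At inference time the integer weights are used together with the stored scale;
# the round trip shows how little precision a well-scaled tensor loses.
reconstruction_error = np.abs(w_fixed.astype(np.float32) * scale - weights).max()
print(f"scale={scale:.6f}, max reconstruction error={reconstruction_error:.6f}")
```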

