Example of the OpenVINO framework in a Docker container

The use case is the hypercompression of video streams through local extraction of metadata via Artificial Intelligence. Typical applications are remote surveillance, detection of prohibited gestures or postures, and anonymisation of surveillance footage.

For a guided demo of Human Pose estimation on OpenVINO and other AI demos, please contact us through the Request Live Presentation form.
More information about the OpenWrt secure hypervisor and the SEC-Line OpCenter management console can be found on the SEC-Line demo page.

Neural Network Acceleration

Network Training

The training phase of a neural network determines the weight coefficients of each neuron in the network by repeatedly running inference calculations and adjusting the weights to reduce the prediction error. This phase can be carried out offline, on cloud servers, using popular frameworks such as TensorFlow, Caffe or PyTorch. Only the inference deployment phase needs to be optimized for the actual embedded platforms deployed onboard.
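To make the idea concrete, here is a toy illustration of how training determines weights; this is a minimal stand-in in plain Python, not TensorFlow, Caffe or PyTorch, which perform the same forward-pass/weight-update loop at far larger scale:

```python
# Toy illustration: training a single neuron y = w*x + b by gradient
# descent on the squared error. Real frameworks (TensorFlow, Caffe,
# PyTorch) apply the same principle to millions of weights.

def train(samples, lr=0.05, epochs=500):
    """Fit w, b so that w*x + b approximates the target t for each (x, t)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = w * x + b          # forward pass (an inference calculation)
            err = y - t            # prediction error
            w -= lr * err * x      # weight update: gradient of error wrt w
            b -= lr * err          # weight update: gradient of error wrt b
    return w, b

# Learn the mapping t = 2x + 1 from a handful of samples.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)
```

After training, only the learned weights `w` and `b` are needed for deployment; this is why the inference phase alone has to be optimized for the embedded target.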

Network Accelerators

Determining the type of neural network accelerator to use in the embedded system is necessary, as relying on the CPU alone may saturate its capacity and increase power dissipation. This is demonstrated by the standard ResNet-50 benchmark, which compares inference execution times and the associated CPU workload across accelerators.
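A comparison like this boils down to timing repeated inference passes per candidate device. The sketch below shows the shape of such a harness; `resnet50_infer` is a hypothetical stub standing in for a real ResNet-50 inference call, which would differ per device:

```python
# Sketch of a per-device latency harness. resnet50_infer is a dummy
# workload standing in for an actual ResNet-50 inference pass.
import time

def resnet50_infer(frame):
    # Placeholder compute standing in for one inference pass.
    return sum(x * x for x in frame)

def benchmark(infer, frames, warmup=2):
    for f in frames[:warmup]:        # warm caches / lazy initialisation
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return elapsed / len(frames)     # mean latency per frame, in seconds

frames = [list(range(1000))] * 50
latency = benchmark(resnet50_infer, frames)
```

Running the same harness with the inference call bound to each candidate device (CPU only, CPU plus accelerator, and so on) yields the execution-time comparison described above.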

Accelerator options

Inference Framework

This phase can be particularly complex. However, all Intel devices capable of computing inferences are supported through a single inference framework: the OpenVINO™ toolkit (Intel® Open Visual Inference and Neural network Optimization). With this framework, trained neural networks are converted and optimized for the target hardware, whether it is a CPU, a CPU with vector acceleration, a CPU with an integrated VPU or GPU, a CPU hosting a dedicated external accelerator, or an FPGA.


With a powerful, dedicated API supported by OpenVINO™, only a few lines of Python or C++ are necessary to acquire successive images from a video, scale them appropriately, apply some classic early image processing (light, contrast, resizing…), run them through the network, and finally rebuild a video stream annotated with the detections made by the neural network. One of the major benefits of OpenVINO™ is that inferences execute on any Intel® platform with minimal changes to the application, with or without taking advantage of the different accelerators available on the hardware. Later, during application design, designers can decide whether or not to include external hardware depending on the actual total workload and overall power dissipation.
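The pipeline just described can be sketched structurally as below. This is not the actual OpenVINO™/OpenCV code: the capture, preprocessing, inference and annotation steps are hypothetical stubs showing only how the stages fit together:

```python
# Structural sketch of the capture -> preprocess -> infer -> annotate
# loop. Each function is a stub standing in for the real OpenCV or
# OpenVINO call it is named after.

def read_frames(source):
    # Stand-in for video capture: yields raw "frames".
    yield from source

def preprocess(frame, size=4):
    # Stand-in for resizing / light and contrast adjustment.
    return frame[:size]

def infer(tensor):
    # Stand-in for the network forward pass: returns a dummy detection.
    return {"label": "person", "score": 0.9} if sum(tensor) > 0 else None

def annotate(frame, detection):
    # Stand-in for drawing boxes and labels on the frame.
    return (frame, detection)

def run_pipeline(source):
    annotated = []
    for frame in read_frames(source):
        det = infer(preprocess(frame))
        if det:
            annotated.append(annotate(frame, det))
    return annotated

out = run_pipeline([[1, 2, 3, 4, 5], [0, 0, 0, 0]])
```

Swapping the inference stub for calls bound to a different device is the only change needed to retarget the application, which is the portability benefit described above.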

White paper coming soon! Accelerating Neural Networks for Public Transportation



Embedded systems designers and developers must now run AI algorithms at the edge to reduce the data streams going to the Internet and the cloud, especially in mobile onboard applications where wireless bandwidth, data throughput and data transit costs are key considerations. Furthermore, onboard AI-enabled systems need efficient inference hardware with low power dissipation (<10 W) and the ability to function reliably over a temperature range of -40 °C to +70 °C while withstanding continuous vibration.


