What is Embedded Vision?

The Evolution of Embedded Vision, State-of-the-Art Technology, and the Latest Trends for Industrial Vision Systems

Embedded Vision is booming: Industrial automation and robotics, medical and laboratory devices, and smart city and traffic applications all benefit from camera data and the insights it provides. But how did all this evolve, from the beginnings in the 1990s until today? Read on to learn how technological advancements made it possible to shrink vision systems from PC-tower size to compact, smart units, and how modern embedded processors and camera modules for industrial use open up new possibilities for the design of perfectly integrated embedded electronics.

The Definition of an Embedded Vision System

Embedded systems usually refers to computer systems that combine a processor, memory, and input/output peripherals, serve a dedicated function, and are built into a larger mechanical or electronic system (according to Wikipedia).

Embedded vision systems are usually based on an image sensor for data acquisition and a processing unit, for example a microcontroller. As cameras are embedded as sensors in a growing number of applications, the label embedded vision has become common. It covers both miniature board-level camera modules that can be deployed in various applications and complex systems incorporating a camera module, a processing unit with mainboard and system on module, and further electronics such as interface boards and digital I/Os. Additional components such as high-speed triggers, lighting and optics, other sensors, and software can also be part of an embedded vision system.
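To make this concrete, the following minimal sketch shows the typical software loop of such a system: acquire a frame from the image sensor and process it directly on the device. This is an illustrative assumption, not a reference implementation: it presumes a Linux-based board with Python and OpenCV installed and a camera exposed as device 0, and the edge-detection step merely stands in for the application-specific processing.

    import cv2

    # Open the camera (device 0); on embedded Linux boards this is
    # typically a V4L2 device such as /dev/video0.
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("Camera not found - check the sensor connection")

    try:
        while True:
            ok, frame = cap.read()  # acquire one frame from the sensor
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Placeholder for the actual application logic:
            edges = cv2.Canny(gray, 50, 150)
            # In a real embedded system the result would now trigger an
            # output, e.g. a digital I/O signal or a message to a host.
            print("edge pixels:", int((edges > 0).sum()))
    finally:
        cap.release()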


Since embedded vision systems are often built into edge devices and mobile applications, low power consumption is an important aspect of their design. In addition, rugged operating ranges and industrial-grade quality as well as long-term availability are firmly linked to the term embedded electronics. The technology can be found in applications ranging from consumer end products to highly specialized industrial uses.

“For industrial and mass production uses, an important characteristic of embedded vision is components perfectly tailored to the respective application. They dispense with unneeded overhead in terms of components and functionality to keep per-unit costs low. These solutions are as small as possible and optimally suited for edge devices and mobile applications.”

Jan-Erik Schmitt
Vice President of Sales at Vision Components

Embedded Vision Systems – main criteria:

  • Small size
  • Low power consumption
  • Low per-unit cost
  • Rugged operating ranges
Diagram: Criteria for an embedded vision system

From the First Embedded Systems to the Invention of the Smart Camera

The Apollo Guidance Computer, with 2,048 words of working memory and a clock rate of about 1 megahertz, can be seen as a precursor of today’s embedded systems. It was the first computer based on silicon integrated circuits and was produced for the Apollo human spaceflight program. Introduced in 1966, it provided computation and interfaces for guidance, navigation, and control of the spacecraft. It took another 30 years, however, before embedded systems for vision applications emerged. Until then, cameras were connected to external processing units that provided the computing power needed to handle image data, but these setups were the size of a PC tower and far from embedded.

“In 1995, Michael Engel presented the VC11, the world’s first industrial smart camera based on a digital signal processor (DSP). His goal was to create a compact system that allowed image capture and processing within the same device. The camera was offered at a significantly lower cost than PC-based systems and paved the way for today’s smart cameras and embedded vision systems. This also marked the foundation of Vision Components by Michael Engel.”

The 1990s: Homogeneous Embedded Vision Systems

In the following years, DSPs were the standard for image processing in smart cameras. The technology evolved from the first chips by companies such as Analog Devices to models with gigahertz computing power from Texas Instruments in the early years of the 21st century, but the main principle remained the same: image data was captured by an image sensor controlled by the DSP, transferred to internal memory via a bus controller, and processed entirely within a single processing unit. Advanced DSPs even enabled applications such as 3D profile sensors with onboard data processing.

Dual-core ARM processors with FPGA:
A milestone in the evolution of embedded vision

A milestone in the evolution of embedded vision systems was the introduction of dual-core ARM processors with an on-chip FPGA by Xilinx. Vision Components developed its first embedded vision systems based on this heterogeneous architecture in 2014. They combined the parallel real-time processing capabilities of the FPGA with a freely programmable Linux operating system on the ARM cores. This setup made the development of embedded vision systems more versatile and flexible and opened up new possibilities for developers to code and implement software for their specific applications.
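A minimal sketch of this division of labor, under the assumption that the FPGA fabric handles sensor readout and real-time preprocessing and exposes the result to Linux as a V4L2 capture device; the device path /dev/video0 and the brightness check are illustrative placeholders, not a fixed interface:

    import cv2

    # On a heterogeneous ARM+FPGA system, the FPGA handles the sensor
    # interface and real-time preprocessing; the Linux application on the
    # ARM cores only consumes the already-preprocessed frames.
    cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)  # path is an assumption

    while cap.isOpened():
        ok, frame = cap.read()  # frame was already preprocessed in the FPGA fabric
        if not ok:
            break
        # Application-level logic runs here, free from hard real-time
        # constraints, because the time-critical pixel pipeline is in the FPGA.
        mean_intensity = frame.mean()
        if mean_intensity < 10:  # arbitrary threshold for illustration
            print("scene too dark - check lighting")

    cap.release()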

Computing power continued to increase as new processors were developed, driven by high demand for small, powerful electronics for consumer products, smart homes, and industry, as well as new applications in the automotive sector. As a result, embedded vision systems can now be designed to fit almost any market requirement, from board-level cameras with multi-core processing power to extremely small embedded vision systems that combine image acquisition and processing with ARM and FPGA cores on a single board the size of a postage stamp.

State-of-the-Art Technology:
Smart, flexible and ultra-compact

The state of the art today is heterogeneous, multi-core system designs that combine ARM CPUs and high-end FPGAs with specialized processing units such as DSPs, graphics processing units (GPUs), tensor processing units (TPUs) developed specifically for machine learning, and AI-optimized neural processing units (NPUs). In the design of these systems, processing units can be deployed using system-on-modules that already contain all the processors. It is also possible to combine different modules and thus gain greater freedom in selecting the processor for the main application.
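To illustrate how such a specialized unit is addressed from application code, the sketch below hands inference off to a Coral Edge TPU through TensorFlow Lite’s delegate mechanism. This is one possible setup, not the only one: it assumes the tflite_runtime package and the Edge TPU runtime library are installed, and the model file name is a placeholder.

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load a model compiled for the Edge TPU and hand inference off to the
    # accelerator through its delegate; the CPU only prepares inputs and
    # reads back results. "model_edgetpu.tflite" is a placeholder name.
    interpreter = tflite.Interpreter(
        model_path="model_edgetpu.tflite",
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Dummy input with the shape and dtype the model expects,
    # standing in for a preprocessed camera frame.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()  # runs on the NPU/TPU, not on the CPU
    result = interpreter.get_tensor(out["index"])
    print("inference output shape:", result.shape)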

VC PowerSoM

Sounds good?

Then read on to find out how best to integrate Embedded Vision into your project, and write to us!

We have summed up the three best ways to integrate Embedded Vision into your application in this blog post. And we’re happy to support you according to your individual requirements.
