Technological developments and advances
Smartphones, rear-view cameras, automatic deposit machines, inspection systems in production plants, autonomous mobile robots: it is impossible to imagine everyday life without cameras. These cameras are seamlessly integrated into their host applications, in line with the definition of embedded vision. Consumer electronics and the automotive industry have been important drivers of this development towards miniaturized cameras and increasingly powerful embedded processor boards. In this blog post, we look back at the beginnings of embedded vision and its technical development over the past 30 years.
From first embedded systems to the invention of the Smart Camera
The Apollo Guidance Computer, with a working memory of 2 KB and a clock rate of 100 kilohertz, can be seen as a precursor of today's embedded systems. It was the first computer based on silicon integrated circuits and was built for the Apollo human spaceflight program. Introduced in 1966, it provided computation and interfaces for guidance, navigation, and control of the spacecraft.
However, it took another 30 years for embedded systems to reach vision applications. Until then, cameras were connected to external processing units that provided the computing power needed to handle image data, but these were tower-sized systems, far from embedded.
In 1995, Michael Engel presented the VC11, the world's first industrial smart camera based on a digital signal processor (DSP). His goal was a compact system that captured and processed images within the same device. Offered at a significantly lower cost than PC-based systems, the camera paved the way for today's smart cameras and embedded vision systems. It also marked the founding of Engel's company, Vision Components.
The 1990s: Homogeneous Embedded Vision Systems
In the following years, DSPs became the standard for image processing in smart cameras. The technology evolved from the first chips by companies such as Analog Devices to models with gigahertz computing power from Texas Instruments in the early 2000s, but the basic principle remained the same: image data was captured by an image sensor controlled by the DSP, transferred to internal memory via a bus controller, and processed entirely in a single processing unit. Advanced DSPs even enabled applications such as 3D profile sensors with onboard data processing.
Dual-core ARM processors with FPGA: a milestone in the evolution of embedded vision
The introduction of Xilinx system-on-chips combining dual-core ARM processors with an FPGA marked a milestone in the evolution of embedded vision systems. Vision Components developed its first embedded vision systems based on this heterogeneous architecture in 2014. They combined the parallel real-time processing capabilities of the FPGA with a freely programmable Linux operating system running on the ARM cores. This setup made embedded vision systems more versatile and flexible and opened up new possibilities for developers to write and deploy software for their specific applications.
Computing power has kept increasing with each new processor generation, driven by high demand for small, powerful electronics in consumer products, smart homes, and industry, as well as by new automotive applications. As a result, embedded vision systems now fit almost any market requirement, from board-level cameras with multi-core processing power to extremely compact systems that combine image acquisition and processing with ARM and FPGA cores on a single board the size of a postage stamp.
1996 to date: Vision Components milestones in the development of embedded vision
From the world's first intelligent camera for industrial applications, through the development and production of board-level and housed cameras, to the FPGA booster VC PowerSoM: Vision Components has significantly shaped and driven the development of innovative embedded vision products.
State of the Art Technology: Smart, flexible and ultra-compact
Today's state of the art is heterogeneous, multi-core system designs that combine ARM CPUs and high-end FPGAs with specialized processing units: DSPs, graphics processing units (GPUs), tensor processing units (TPUs) developed specifically for machine learning, and AI-optimized neural processing units (NPUs). For the design and development of such systems, there is a choice of numerous processor boards and system-on-modules on which the corresponding processing units are already fully integrated, and different modules can be combined for even more flexibility. As a result, developers enjoy the greatest possible freedom in selecting the processor for their main application and benefit from fast, cost-effective development and thus a shorter time to series maturity.
Do you want to be part of the embedded vision success story?
Then read on to find out how you can best integrate embedded vision into your project, and let us know your questions! In a follow-up blog post, we have compiled the three most efficient ways to integrate embedded vision. We look forward to helping you with your individual projects and challenges.